The adventure of chess programming (3)

by Frederic Friedel
2/18/2019 – In the first two parts of this series, originally published in Spiegel Online, Europe's largest (and most influential) news portal, Frederic Friedel described his decades-long involvement in computer chess. He was commissioned to make the story personal, and in part three he describes his encounter with the latest twist in this area of research: artificial intelligence and self-learning machines that play at the very highest levels of the game. Where are we headed, and what will the future bring?


This article is reproduced with kind permission of Spiegel Online, where it first appeared. The author was asked to make the series personal, describing the development of chess programming not as an academic treatise but as a story of how he experienced it. For some ChessBase readers a number of the passages will be familiar, since the stories have been told before on our pages. For that we apologize. For others this can serve as a roadmap through one of the great scientific endeavors of our time.

The Adventure of Chess Programming (3)

By Frederic Friedel

It was the mid-1990s. I was in London, accompanying World Chess Champion Garry Kasparov, as I often did, on one of his appearances. This time it was at Home House, a beautiful Georgian villa in Marylebone, and one evening we were joined at dinner by a former child prodigy in chess. He had reached master level (Elo 2300+) at the age of 13 and captained a number of English junior chess teams. He was also a world-class computer games player. It was an interesting encounter, with the lad enthusiastically describing a computer game he was developing. After he left I said to Garry: "That's a cocky young fellow!" "But very smart," Garry replied. And we left it at that.

Twenty years later I read in the news that Google had purchased a company called DeepMind Technologies for £400 million. DeepMind was a British artificial intelligence enterprise which had created neural network software that learned to play early video games like Pong and Space Invaders, all on its own. It was not hand-programmed to do this, but used methods much like those of a human player gaining proficiency in a game. The goal, DeepMind said, was "to create a general-purpose AI that can be useful and effective for almost anything." One of the founders of the company was Demis Hassabis.

Demis? Wait a minute, wasn't that the lad we had met at Home House? For a year I watched the progress the company made as a member of the Google family, and was especially fascinated to see how they solved a problem that had vexed computer experts for decades: DeepMind created a program, AlphaGo, that learned to play the ancient game of Go, taking it all the way to master and then world championship level. The rules of Go are deceptively simple, but the branching factor makes it very hard for computers to calculate. In the first article of this series [link] I described how in a 40-move game of chess there are 10^128 possible sequences of moves – vastly more than the number of atoms in the universe. Well, in Go there are 10^170 possible board configurations, which dwarfs the number of chess games to insignificance.
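
To get a feel for such magnitudes, a back-of-the-envelope calculation helps. The branching factors and game lengths below are illustrative assumptions, not the exact figures quoted above, but they show how quickly the numbers explode:

```python
import math

# Rough game-tree estimates (illustrative assumptions):
# Chess: ~30 legal moves per position; a 40-move game is 80 half-moves.
print(f"chess: ~10^{80 * math.log10(30):.0f} move sequences")    # ~10^118

# Go: ~250 legal moves per position; a typical game lasts ~150 moves.
print(f"go:    ~10^{150 * math.log10(250):.0f} move sequences")  # ~10^360

# For comparison, the observable universe contains roughly 10^80 atoms.
```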

We followed the progress of AlphaGo closely on the news page of ChessBase, which shares with DeepMind an affinity for capitalising in the middle of names. The program used deep neural networks to study a very large number of games, developing its own understanding of what human play looks like. After that it honed its skills by playing different versions of itself against each other, learning from its mistakes. This process, known as reinforcement learning, produced master-level Go-playing software.
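
In outline, that training loop is simple. The toy sketch below shows the idea; every name in it (Net, play_game, update) is hypothetical, and the "game" is a trivial stand-in, not Go:

```python
import random

class Net:
    """Stands in for a policy/value neural network."""
    def __init__(self):
        self.weights = [random.random() for _ in range(10)]

    def choose_move(self, position, legal_moves):
        # A real network would score each move; here we pick at random.
        return random.choice(legal_moves)

def play_game(net):
    """Play one toy 'game'; return the positions visited and the result."""
    positions, position = [], 0
    for _ in range(40):                     # cap the game length
        position += net.choose_move(position, [-1, +1])
        positions.append(position)
    return positions, (1 if position > 0 else -1)   # toy win/loss criterion

def update(net, positions, result):
    """Nudge the weights toward the outcome (stand-in for gradient descent)."""
    for i in range(len(net.weights)):
        net.weights[i] += 0.001 * result

net = Net()
for game in range(1000):                    # AlphaGo played many millions
    positions, result = play_game(net)
    update(net, positions, result)          # learn from its own games
```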

More than twenty years after that first encounter, Garry Kasparov discusses artificial intelligence with Demis Hassabis in this highly enlightening 40-minute Google Talk.

At this stage I contacted Demis, who remembered our encounter at Home House and invited me to visit DeepMind in London. My counter-proposal: his team should come to Hamburg to see the assets we have for chess. ChessBase has over eight million high-class games, 100,000 of them annotated by very strong players, 200 million chess positions in the cloud with the evaluations of the world's most powerful computers attached to each of them, the largest and most up-to-date "live" openings book in the game, etc., etc. DeepMind could use this data to train a neural network for chess – more accurately: have the neural network train itself to play the game.

Demis was open to the idea and promised to consider it. What he did not tell me at the time was that they were already developing a chess engine unlike anything anyone had ever seen before. Traditional engines have their knowledge of the game of chess programmed into them, meticulously, one factor at a time. The DeepMind neural network took a radically different path: it was told the rules of the game, how the pieces move and the ultimate goal of checkmate. Nothing else. Using state-of-the-art techniques in artificial intelligence, the program, AlphaZero, played against itself millions upon millions of times, identifying patterns of its own accord and adjusting its evaluations as it saw fit. In other words, it produced its own concepts and knowledge, using pattern recognition just as humans do, and improving as it learned. And it did this without the need for all the ChessBase data I was offering.
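
To make "rules only" concrete: everything such a system is told about chess amounts to a move generator and a test for the end of the game. The snippet below shows that minimal interface using the third-party python-chess library; it illustrates the principle and is of course not DeepMind's actual code:

```python
import chess  # third-party "python-chess" library: pip install chess

board = chess.Board()                        # the standard starting position

# The rules of movement: the full list of legal moves in any position.
print(len(list(board.legal_moves)), "legal moves at the start")   # 20

# The ultimate goal: recognising when a game is over, and who won.
for move in ["f3", "e5", "g4", "Qh4"]:       # fool's mate
    board.push_san(move)
print(board.is_checkmate())                  # True
print(board.result())                        # "0-1"
```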

How was this possible? Initially the system played absurd games, in which one side gives up three pieces for nothing and the other side cannot win because it has lost four. But with each iteration, every 10,000 or so training steps, it became stronger. Running on the latest proprietary hardware – for the technically savvy: 5,000 first-generation and 64 second-generation TPUs – the program played 44 million games against itself and, in the process, rose to world-class chess strength. Nobody had told AlphaZero anything about strategy, nobody had explained that material was important, that queens were more valuable than bishops, that mobility mattered. It had worked everything out by itself, drawing its own conclusions – conclusions, incidentally, that no human being will ever be able to comprehend.

In the end AlphaZero played a test match against an open-source engine named Stockfish, one of the top three or four brute-force engines in the world. These programs all hover around 3500 points on the rating scale, at least 700 points more than any human player. Stockfish ran on 64 processor threads and looked at 70 million positions per second; AlphaZero ran on a machine with four TPUs, looking at just 80,000 positions per second. It compensated for this roughly thousand-fold disadvantage by selectively searching only the most promising variations – moves that in its self-play had proved effective in similar positions.
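
The rule behind that selective search is worth a closer look. AlphaZero uses a form of Monte Carlo tree search in which the network's prior probability for each move steers exploration. Below is a simplified sketch of that selection rule (known from DeepMind's papers as PUCT); the Node class, the constant and the toy numbers are mine, for illustration only:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float            # the network's prior probability for this move
    visits: int = 0
    value_sum: float = 0.0
    children: list = field(default_factory=list)

def select_child(node, c_puct=1.5):
    """Pick the child maximising Q + U: exploitation plus prior-guided exploration."""
    total = sum(c.visits for c in node.children)
    def score(c):
        q = c.value_sum / c.visits if c.visits else 0.0           # average result
        u = c_puct * c.prior * math.sqrt(total) / (1 + c.visits)  # exploration bonus
        return q + u
    return max(node.children, key=score)

# Toy demo: a promising, barely explored move beats a well-explored weak one.
root = Node(prior=1.0, children=[
    Node(prior=0.6, visits=10, value_sum=2.0),    # decent move, often visited
    Node(prior=0.3, visits=1,  value_sum=0.5),    # promising, barely explored
    Node(prior=0.1, visits=50, value_sum=-5.0),   # bad move, heavily refuted
])
best = select_child(root)
print(best.prior, best.visits)                    # 0.3 1
```

The effect is that search effort flows toward moves the network already believes in, which is how 80,000 positions per second can compete with 70 million.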

In the 100 games that were played against Stockfish, AlphaZero won 25 as white, three as black, and drew the remaining 72. All games were played without recourse to an openings book. In addition, a series of twelve 100-game matches was played, starting from the 12 most popular human openings; AlphaZero won 290, drew 886 and lost 24 games. Some in the traditional computer chess community call the match conditions "unfair" (no opening books, or only constrained openings), but I conclude that without doubt AlphaZero is the strongest entity that has ever played chess. And it had become this after studying the game, from scratch, all alone, without any external advice, for a total of about nine hours.

Fred 'n Demis – reunion after two decades, at the World Chess Championship in London, November 2018

Google and DeepMind were quite relaxed about the project and revealed the methods they used to all and sundry. One of the project managers even came to visit ChessBase in Hamburg and gave a talk to half a dozen of our talented young programmers. They went away inspired, determined to learn more about this kind of computer intelligence.

The Whoosh

Of course I myself could not resist. In mid-November I asked my son Tommy and nephew Noah to build me a powerful computing machine. They bought the components, including a 12-core processor and two state-of-the-art graphics cards that had just been released. These cards contain thousands of shader and tensor cores, originally intended to power 3D video display in games. But it turns out that these processors are eminently suited to neural network calculation.
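
Why graphics hardware is such a good fit: neural network evaluation consists almost entirely of large matrix multiplications, which those cores execute massively in parallel. The timing experiment below is a small sketch, assuming PyTorch with CUDA support is installed; exact timings depend entirely on the hardware:

```python
import time
import torch  # assumes PyTorch with CUDA support is installed

# Neural network evaluation is dominated by large matrix multiplications,
# exactly the workload that GPU shader and tensor cores are built for.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
_ = a @ b                            # matrix multiply on the CPU
cpu_secs = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()         # let pending transfers finish first
    t0 = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()         # wait until the GPU is really done
    print(f"CPU: {cpu_secs:.3f}s, GPU: {time.time() - t0:.3f}s")
```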

So now I have a very powerful AI machine humming in my home office. Humming? Actually it is the fairly loud whirring of multiple fans dissipating the heat from the 600 watts of energy the computer consumes. That heats the room to a very comfortable 23°C, with the central heating turned off. You get used to the steady whoosh of the machine. And there is one interesting thing to consider: if I had had this machine around the year 2000, it would have been the most powerful computer in the world!

What do we do with the super-machine? A friend who is an expert on computer chess uploaded all the tools needed to build a neural network for chess, and the machine went to work, playing an average of 95,000 games per day against itself, learning from them and from other games. In a few months, we hope, it will reach the AlphaZero level of play and maybe even go further. It is already able to stand up to top brute-force programs, some running on massive hardware at 1.6 billion positions per second.

All this is exquisitely exciting, not just because our AI program may advance to new superhuman levels of chess-playing strength. More important is that it does this in a completely new way, not with brute-force tactics but with positional ideas that it has come up with after studying millions of games. All by itself, with no human intervention.

And that is not the whole story. The techniques used by DeepMind are not only applicable to chess. One can use neural networks to learn all kinds of things – recognise images, faces and handwriting; process natural language; calculate motion (e.g. for advanced computer games or robots); model economies and stock markets, making better predictions than human experts; and many other things that are coming in the next decade. Our young programmers want to understand how their field is being transformed by the transition from explicit hand-coding to unsupervised learning by computers which, in many areas, are already doing a better job than humans.

AlphaZero is just an early example of computers solving complex problems without human intervention. It has demonstrated in striking fashion that this is possible – and, we must conclude, not just for Go and chess. We are going to see the same process take place in many other areas of human endeavour. It is the future of mankind, and we would do well to be prepared for it.

Previous articles in this series

The Adventure of Chess Programming (Part 1)
Did you know that the first chess program was written by Alan Turing a few years before the first computers were built? The first chess program to actually run on a machine was MANIAC, written in 1951 by the atomic bomb scientists in Los Alamos. Fifty years later computers were challenging world champions, and today it is pointless for any human to play against a digital opponent.

The adventure of chess programming (Part 2)
How do computers play chess – how do they "think"? The author discusses the very, very big numbers involved in looking ahead at all possible continuations. Unfortunately, efforts to prune the search tree and look only at plausible lines failed, while advances in hardware and software development led to the triumph of the "brute force" method.

Originally published in Spiegel Online (in German): Siege durch brutale Gewalt ("Victories through brute force").

About the author: Frederic Friedel, 73, studied Philosophy, Science and Linguistics in Hamburg and Oxford. As a science journalist he was employed by the national TV channels ZDF and ARD, and in this capacity was reporting on computer chess as early as the 1970s. In 1986 Friedel founded ChessBase together with Matthias Wüllenweber; the company is today one of the world's largest producers of chess software. It is also a cooperation partner of the SPIEGEL. Friedel lives in Hamburg, Germany.


Editor-in-Chief emeritus of the ChessBase News page. Studied Philosophy and Linguistics at the University of Hamburg and Oxford, graduating with a thesis on speech act theory and moral language. He started a university career but switched to science journalism, producing documentaries for German TV. In 1986 he co-founded ChessBase.

Discuss


celeje celeje 2/20/2019 02:27
@ Frederic & @dumkopf:

I don't want this to be some huge argument, but "solve" in the context of chess (which is the context of this page and the whole website) carries only the technical meaning. It is not esoteric at all.

In the "this policy could solve the town's housing crisis" sense, neural networks have been "solving" problems for many, many decades.
Frederic Frederic 2/19/2019 07:29
@RoselleDragon: "Articles like this make me want to live for ever." Well, maybe you should read this article, which resulted from a discussion with Nigel Short and Hou Yifan: https://medium.com/@frederic_38110/gerontology-how-long-will-you-live-4c9fd2704377
Frederic Frederic 2/19/2019 07:27
@dumkof (incidentally where does this name come from?): Thanks for defending my use of the word "solve". It means to "find an answer to, explanation for, or means of effectively dealing with (a problem or mystery)", as in
"this policy could solve the town's housing crisis". To solve means to find an/the answer to, find a/the solution to, answer, resolve, work out, puzzle out, fathom, find the key to, decipher, decode, break, clear up, interpret, translate, straighten out, get to the bottom of, make head or tail of, unravel, disentangle, untangle, unfold, piece together, explain, expound, elucidate. It can also, occasionally, mean to work out every possible outcome of a logical system, but that is a very esoteric technical use of the word.
Frederic Frederic 2/19/2019 07:21
@soikins: DeepMind showed that it was possible for a machine to comprehend chess and play it as well as or better than anyone or anything had ever done before, and to do this without any algorithmic instructions from humans. This is one of the very first substantial examples of this kind of machine learning. Now DeepMind is not that interested in showing exactly how much better AlphaZero is than other programs. They have demonstrated and tested the matter in principle. Currently they are folding proteins better than any human or hand-programmed computer. This is of immense importance to medicine and human well-being, arguably more so than playing a rematch against Stockfish. Let other programmers forge on in this enterprise.
celeje celeje 2/19/2019 02:42
@dumkopf:

Neural networks have been around for MANY, MANY decades.

A0 does not "solve" anything, because "solve" has the technical meaning that you end up with the PERFECT answer, not just something supposedly "good".
soikins soikins 2/19/2019 12:15
@dumkopf

DeepMind is already pursuing other goals (last I saw, AlphaZero was beating StarCraft pros) and is probably not much interested in chess anymore, so it's unlikely we will see an AlphaZero vs Stockfish rematch.
Nevertheless, LeelaChessZero, another NN engine built on the same principles as AlphaZero, is currently at +1 after 76 games in a TCEC final match against Stockfish. This is a fairly young (about one year old) open-source NN engine that can already challenge the strongest AB engine in the world under quite fair conditions (as far as CPU vs GPU can be compared).
dumkof dumkof 2/19/2019 11:42
@fixpont

Thanks a lot for your detailed technical answer. I thought that both learning and calculation were done on the same supercomputer. I'm really surprised now. But why would Google limit itself by not using the same supercomputer for both tasks (learning + calculation)?
dumkof dumkof 2/19/2019 11:23
@celeje

A traditional computer doesn't "solve" anything; it only calculates according to a programmed algorithm.

AlphaZero, instead, has no pre-programmed algorithm. The only things given are the chess rules. It learns and builds its own algorithm. It has the ability to learn and apply. In other words, it perfectly "solves" complex problems according to given rules, without any other human intervention. Mr Friedel's "solving" expression is fully appropriate, in my opinion.
celeje celeje 2/19/2019 06:03
Frederic: "AlphaZero is just an early example of computers solving complex problems without human intervention."

This is wrong.

1. It's not "solving". 2. It's not an early example. Computers have been tackling complex problems without human intervention for a long time.
afiedito afiedito 2/19/2019 05:25
I enjoyed reading and learning more about AI and chess. I highly recommend the new book Game Changer by Sadler and Regan, which discusses the history of computer chess programs and new insights learned from analyzing games between AlphaZero and other powerful engines. The future of computer programs and chess is very exciting.
fixpont fixpont 2/19/2019 02:32
dumkof: it was not run on a supercomputer, it ran on four first-generation Google TPU units

"According to Google's own documentation, TPU 1.0 was built on a 28nm process node at TSMC, clocked at 700MHz, and consumed 40W of power. Each TPU PCB connected via PCIe 3.0 x16."

So it was running on hardware whose power consumption is less than that of a modern GPU (the learning process was on a supercomputer, using 5,000 TPU v1 and 64 TPU v2 units).

"Countless chess and computer fans have been waiting for such a match (with equal hardware). Why still ignoring this?"

They can't run on the same hardware; they are architecturally different.
jaberwocky jaberwocky 2/19/2019 12:56
Computer skills in many fields are increasing dramatically. Some people worry a lot about what roles will be left for humans in the future.
However, the human mind is still the most complicated and wonderful thing known to science. Let's believe and hope that human judgement and free will can remain very important.
fgkdjlkag fgkdjlkag 2/18/2019 08:29
Just because the Stockfish match was unfair does not mean that Stockfish was stronger.
The 9 hours comment is meaningless because it depends on the hardware. Running my computer for 9 hours is not the same as running $40 million of hardware for 9 hours.
dumkof dumkof 2/18/2019 07:12
Will we ever see a repeated Stockfish - Alphazero match, but this time with both parties working on the same supercomputer? That would be a real test and real comparison.

Countless chess and computer fans have been waiting for such a match (with equal hardware). Why still ignoring this?
fons3 fons3 2/18/2019 04:26
>>"A friend who is an expert on computer chess uploaded all the tools needed to build a neural network for chess, and the machine went to work playing an average 95,000 games per day against itself, learning from them and from other games. In a few months, we hope, it will reach the AlphaZero level of play and maybe even go further."

No offense but either you are trying to fool the audience or you are surprisingly uninformed about the subject considering who you are.

I am assuming you are using one of the LCZero versions, what else? It took LCZero about a year of development and support from a community of hundreds of people who all have similar or stronger hardware to help contribute to reach Stockfish level of play.

In the meantime the score between LCZero & Stockfish in the 100 game TCEC superfinal is currently 72 vs 72, which already is an historic achievemenet. (Especially considering that LCZero lost several points due to blind spots (or bugs) that still exist in the system. More info on the LCZero blog.)
RoselleDragon RoselleDragon 2/18/2019 04:10
Articles like this make me want to live for ever.
Aristarchus Aristarchus 2/18/2019 02:22
I would also point out the existence of Leela Chess Zero...
hurwitz hurwitz 2/18/2019 01:49
Nice article!