The future is here – AlphaZero learns chess

by Albert Silver
12/6/2017 – Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured the game out to a level that convincingly beats the strongest programs in the world! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess, and came up with this spectacular result.


DeepMind and AlphaZero

About three years ago, DeepMind, a company owned by Google that specializes in AI development, turned its attention to the ancient game of Go. Go had been the one game that had eluded all computer efforts to reach world class, and even at the time of the announcement it was deemed a goal that would not be attained for another decade; that was how large the gap was. When a public challenge and match was organized against the legendary player Lee Sedol, a South Korean whose track record placed him in the ranks of the greatest ever, everyone expected an interesting spectacle, but a certain win for the human. The question wasn't even whether the program AlphaGo would win or lose, but how much closer it had come to the Holy Grail. The result was a crushing 4-1 victory, and a revolution in the Go world. In spite of a ton of second-guessing by the elite, who could not accept the loss, they eventually came to terms with the reality of AlphaGo, a machine that was among the very best, albeit not unbeatable. It had lost a game, after all.

The saga did not end there. A year later an updated version of AlphaGo was pitted against the world number one in Go, Ke Jie, a young Chinese player whose genius is not without parallels to Magnus Carlsen in chess. At just 16 he won his first world title, and by 17 he was the clear world number one. That had been in 2015; now, at 19, he was even stronger. The new match was held in China itself, and even Ke Jie knew he was most likely a serious underdog. There were no illusions anymore. He played superbly but still lost by a perfect 3-0, a testimony to the amazing capabilities of the new AI.

Many chess players and pundits had wondered how it would do in the noble game of chess. There were serious doubts about just how successful it might be. Go is a huge, long game played on a 19x19 grid, in which all pieces are the same and none of them moves. Calculating ahead as in chess is an exercise in futility, so pattern recognition is king. Chess is very different. There is no questioning the value of knowledge and pattern recognition in chess, but the royal game is supremely tactical, and a lot of knowledge can be compensated for by simply outcalculating the opponent. This has been true not only of computer chess but of human play as well.

However, there were some very startling results in the last few months that need to be understood. DeepMind's interest in Go did not end with that match against the number one. You might ask what more there was to do after that. Beat him 20-0 instead of 3-0? No, of course not. Instead, the super Go program became an internal litmus test of sorts. Its standard was unquestioned and quantified, so if one wanted to test a new self-learning AI and gauge how good it was, throwing it at Go and seeing how it compared to AlphaGo was a ready-made way to measure it.

A new AI was created called AlphaZero. It differed in several striking ways. The first was that it was not shown tens of thousands of master games of Go to learn from; it was shown none. Not a single one. It was given only the rules, without any other information. The result was a shock. Within just three days the completely self-taught program was stronger than the version that had beaten Lee Sedol, a result the previous AI had needed over a year to achieve. Within three weeks it was beating the strongest AlphaGo, the one that had defeated Ke Jie. What is more: while the Lee Sedol version had used 48 highly specialized processors, this new version used only four!
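What does learning from the rules alone actually involve? In broad strokes, the recipe described in DeepMind's papers is: play games against yourself guided by a search, label every position of a finished game with the final result, and train the network on those labels. The toy Python sketch below only illustrates the shape of that loop; the Game class and the search and train functions are hypothetical stand-ins, not DeepMind's code, and a real implementation would plug in the actual game and a deep neural network.

```python
import random

# Illustrative stand-ins only: a real implementation would plug in the
# actual game (chess, Go, shogi) and a deep neural network.

class Game:
    """A toy game so the training loop below actually runs."""
    def __init__(self):
        self.history = []

    def legal_moves(self):
        return [0, 1]

    def play(self, move):
        self.history.append(move)

    def is_over(self):
        return len(self.history) >= 10

    def outcome(self):
        # Arbitrary toy rule standing in for real win/loss detection.
        return 1 if sum(self.history) > 5 else -1

def search(game, net):
    # Placeholder for MCTS guided by the network's policy and value.
    return random.choice(game.legal_moves())

def train(net, examples):
    # Placeholder for a gradient step on (position, move, result) targets.
    return net

net = None  # would be a randomly initialised neural network

for iteration in range(3):            # the real system ran vastly longer
    game, examples = Game(), []
    while not game.is_over():         # one self-play game
        move = search(game, net)      # search improves on the raw policy
        examples.append((list(game.history), move))
        game.play(move)
    z = game.outcome()                # the final result labels every position
    net = train(net, [(pos, move, z) for pos, move in examples])
```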

Graph showing the relative evolution of AlphaZero | Source: DeepMind

AlphaZero learns chess

Approaching chess might still seem an odd choice. After all, although DeepMind had already shown near-revolutionary breakthroughs in Go, that had been a game computers had yet to 'solve'. Chess had its Deep Blue moment 20 years ago, and today even a good smartphone can beat the world number one. What is there to prove exactly?

Garry Kasparov is seen chatting with Demis Hassabis, founder of DeepMind | Photo: Lennart Ootes

It needs to be remembered that Demis Hassabis, the founder of DeepMind, has a profound chess connection of his own. A prodigy in his youth, at age 13 he was the second-highest rated player under 14 in the world, behind only Judit Polgar. He eventually left the chess track to pursue other things, such as founding his own PC video game company at age 17, but the link is there. Still, there was a burning question on everyone's mind: just how well would AlphaZero do if it were focused on chess? Would it just be very smart, but smashed by the number-crunching engines of today, where a single ply is often the difference between winning and losing? Or would something special come of it?

Professor David Silver explains how AlphaZero was able to progress much more quickly when it had to learn everything on its own, as opposed to analyzing large amounts of data. The efficiency of a principled algorithm was the most important factor.

A new paradigm 

On December 5 the DeepMind group published a new paper on arXiv, Cornell University's preprint server, called "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", and the results were nothing short of staggering. AlphaZero had done more than just master the game; it had attained new heights in ways considered inconceivable. The proof of the pudding is in the eating, of course, so before going into some of the fascinating nitty-gritty details, let's cut to the chase. It played a match against the latest and greatest version of Stockfish and won by an incredible score of 64:36, without losing a single game (28 wins and 72 draws)!

Stockfish needs no introduction to ChessBase readers, but it's worth noting that it was examining nearly 900 times as many positions per second! Indeed, AlphaZero was calculating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was examining 70 million positions per second. To better understand how big a deficit that is: if another version of Stockfish were to run 900 times slower, it would be searching roughly 8 moves less deep. How is this possible?
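As a rough sanity check on that last figure: if each extra ply of search multiplies the work by an effective branching factor b, then a speed ratio of roughly 875 costs about log_b(875) plies of depth. The value b = 2.3 below is an assumption (a plausible ballpark for a heavily pruning alpha-beta engine), and with it the arithmetic lands near the article's figure if 'moves' is read loosely as plies:

```python
import math

speed_ratio = 70_000_000 / 80_000   # Stockfish NPS vs. AlphaZero NPS, about 875
b = 2.3                             # assumed effective branching factor per ply

plies_lost = math.log(speed_ratio) / math.log(b)
print(f"depth handicap: {plies_lost:.1f} plies")   # roughly 8 plies shallower
```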

The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" at Cornell University

The paper explains:

“AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search, as originally proposed by Shannon. Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero’s MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief that alpha-beta search is inherently superior in these domains.”

This diagram shows that the longer AlphaZero had to think, the more it improved compared to Stockfish

In other words, instead of the hybrid brute-force approach that has been the core of chess engines to date, it went in a completely different direction, opting for an extremely selective search that emulates how humans think. A top player may be able to outcalculate a weaker player in both consistency and depth, but this remains a joke compared to what even the weakest computer programs do. It is the human's sheer knowledge and ability to filter out so many moves that allows them to reach the standard they do. Remember that although Garry Kasparov lost to Deep Blue, it is not at all clear that the machine was genuinely stronger than him even then, and that was despite it reaching speeds of 200 million positions per second. If AlphaZero is really able to use its understanding not only to compensate for examining 900 times fewer positions, but to surpass the brute-force engines, then we are looking at a major paradigm shift.
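For the technically curious, the selective search in question is a Monte Carlo tree search in which each step down the tree picks the move maximising the network's averaged value estimate plus an exploration bonus scaled by the network's prior (the PUCT rule described in DeepMind's papers). Here is a minimal sketch of that selection step; the node fields and the constant c_puct are illustrative assumptions, not DeepMind's actual code:

```python
import math
from types import SimpleNamespace

def select_child(node, c_puct=1.5):
    """PUCT selection: argmax over moves of Q(s,a) + U(s,a), where
    Q = W/N (average value) and U = c * P * sqrt(N_total) / (1 + N)."""
    n_total = sum(child.N for child in node.children.values())

    def score(child):
        q = child.W / child.N if child.N else 0.0                  # exploitation
        u = c_puct * child.P * math.sqrt(n_total) / (1 + child.N)  # exploration
        return q + u

    return max(node.children.items(), key=lambda kv: score(kv[1]))

# Tiny demo with made-up statistics: N = visits, W = total value, P = prior.
root = SimpleNamespace(children={
    "e4": SimpleNamespace(N=10, W=6.0, P=0.5),
    "d4": SimpleNamespace(N=2, W=1.0, P=0.3),
})
print(select_child(root)[0])   # "d4": rarely visited, so its bonus dominates
```

The point of the rule is visible even in these toy numbers: a move with a decent prior and few visits can outrank a heavily explored one, which is how the search stays narrow without going blind.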

How does it play?

Since AlphaZero was given no chess knowledge beyond the rules, meaning no games and no opening theory, it also had to discover opening theory on its own. And do recall that this is the result of only 24 hours of self-learning. The team produced fascinating graphs showing the openings it discovered, as well as the ones it gradually rejected as it grew stronger!

Professor David Silver, lead scientist behind AlphaZero, explains how AlphaZero learned openings in Go, and gradually began to discard some in favor of others as it improved. The same is seen in chess.

In the diagram above, we can see that in the early games AlphaZero was quite enthusiastic about playing the French Defense, but after two hours (this is so humiliating) began to play it less and less.

The Caro-Kann fared a good deal better, and held a prime spot in AlphaZero's opening choices until it, too, was gradually filtered out. So what openings did AlphaZero actually prefer by the end of its learning process? The English Opening and the Queen's Gambit!

The paper was also accompanied by ten games to share the results. It needs to be said that these are very different from the usual fare of engine games. If Karpov had been a chess engine, he might have been called AlphaZero. There is a relentless positional boa-constrictor approach that is simply unheard of. Modern chess engines are focused on activity, and have special safeguards to avoid blocked positions, since they have no understanding of them and often find themselves in a dead end before they realize it. AlphaZero has no such prejudices or issues, and seems to thrive on snuffing out the opponent's play. It is singularly impressive, and what is astonishing is how it is also able to find tactics that the engines seem blind to.

 

This position, from Game 5 of the ten published, arose after 20...Kh8. The completely disjointed array of Black's pieces is striking, and AlphaZero came up with the fantastic 21.Bg5!! After analyzing the move and its consequences, there is no question it is the killer here. While my laptop cannot produce 70 million positions per second, I gave the position to Houdini 6.02, running at 9 million positions per second. It analyzed for one full hour and was unable to find 21.Bg5!!

A screenshot of Houdini 6.02 after an hour of analysis

Here is another little gem of a shot, in which AlphaZero had completely stymied Stockfish positionally and then wrapped things up with some nice tactics. Look at this incredible sequence from game nine:

 

Here AlphaZero played the breathtaking 30.Bxg6!! The point is obviously that 30...fxg6 fails to 31.Qxe6+, but how do you continue after the game's 30...Bxg5 31.Qxg5 fxg6?

 

Here AlphaZero continued with 32.f5!! and after 32...Rg8 33.Qh6 Qf7 34.f6 obtained a deadly bind, which it worked into a win 20 moves later. Time to get out a thesaurus for synonyms of 'amazing'.

What lies ahead

So where does this leave chess, and what does it mean in general? This is a game-changer, a term so often used and abused, but there is no other way of describing it. Deep Blue was a breakthrough moment, but its result was thanks to highly specialized hardware whose sole purpose was to play chess. If one had tried to make it play Go, for example, it would never have worked. A completely open-ended AI able to learn from the least amount of information and take this to levels hitherto never imagined is not a threat to 'beat' us at any number of activities; it is a promise to analyze problems such as disease and famine in ways that might conceivably lead to genuine solutions.

For chess, this will likely lead to genuinely groundbreaking engines following in these footsteps. That is what happened in Go. For years and years, Go programs had been more or less stuck where they were, unable to make any meaningful advances, and then along came AlphaGo. It wasn't that AlphaGo offered some inspiration to 'try harder'; just as here, a paper was published detailing all the techniques and algorithms developed and used, so that others might follow in those footsteps. And they did. Literally within a couple of months, new versions of top programs such as Crazy Stone began offering updated engines with deep learning, which brought hundreds (plural) of Elo points in improvement. This is no exaggeration.

Within a couple of months, the revolutionary techniques used to create AlphaGo began to appear in top PC programs of Go

The paper on chess offers similar information, allowing anyone to do what they did. Obviously they won't have the benefit of the specialized TPUs, processors designed especially for this kind of deep learning training, but nor are those required. It bears remembering that this was also done without the benefit of many of the specialized techniques and tricks of chess programming. Who is to say the two cannot be combined for even greater results? Even the DeepMind team thinks it worth investigating:

"It is likely that some of these techniques could further improve the performance of AlphaZero; however, we have focused on a pure self-play reinforcement learning approach and leave these extensions for future research."

Replay the ten games between AlphaZero and Stockfish 8 (70 million NPS)



Born in the US, he grew up in Paris, France, where he completed his Baccalaureat, and after college moved to Rio de Janeiro, Brazil. He had a peak rating of 2240 FIDE, and was a key designer of Chess Assistant 6. In 2010 he joined the ChessBase family as an editor and writer at ChessBase News. He is also a passionate photographer with work appearing in numerous publications, and the content creator of the YouTube channel, Chess & Tech.

Discuss


Bobet5 Bobet5 4/4/2018 02:17
Everything is now "mechanistic!" Meaning: from the first twenty-five or thirty moves of any known opening theory and its variations, from the starting point through the middlegame and even into the endgame, all chess moves can now be "statistically predicted," a scenario forecast long ago by the British mathematician and genius Alan Turing. With the advent of today's computer technology and artificial intelligence, with their ingenious state-of-the-art chess programs, human creativity, intelligence and logic are now more than ever at a terrible loss when pitted against a computer. The human mind that discovered the ideas and technology behind computing is now on the brink of losing a serious brain game, a chess game, to a man-made machine it itself created. If this trend continues, then one day the chess world and its players might find themselves losing their taste and love for the game because of the non-creativity of playing against an A.I. that can think millions of moves ahead without getting tired! Surely no serious chess player would want such days to continue for long. How can human chess-playing minds, with their superlative grandmasters and world champions, regain their love, appreciation and dedication to a game which is now "ruled by machine"? Only by reshuffling the arrangement of the thirty-two chess pieces on the sixty-four squares can the chess world and its players lift their spirits anew. How? By "random piece arrangement," with very few chess rules amended. I think it is "FISCHER RANDOM CHESS," a new piece arrangement on the sixty-four-square board, that FIDE officials and, of course, the superlative and ingenious grandmasters must work on together so as to regain the game's decency, creativity and originality.
celeje celeje 3/16/2018 09:46
@Dice960:

No.
Stockfish was horribly misconfigured for Alpha Zero.
Top GMs would beat the horribly misconfigured version.
Dice960 Dice960 3/12/2018 04:52
A new Golden Age of chess has officially been ushered in, as people have finally gotten an idea of what a Morphy-Steinitz match would have been like, and "Morphy" (AlphaZero) crushed "Steinitz." We now have uncertainty at the very highest levels of the game, with AlphaZero validating the positional sacrifice, and Stockfish still unbeatable by humans. Swashbuckling sacrificial players who thought their style was refuted are returning with a vengeance, sacrificing not just pawns, but pieces, or even most of the army to clear the way for the "Delta Force" of remaining pieces to fly in. We can't trust computer evaluations of anything, because even AlphaZero may be missing something at 3700. We have 900 points to improve by copying what the machines are doing. Chess is now officially a video game. Human coaching has been proven to harm the machines. Fascinating. People are treating Stockfish like an aging warrior out of a Jetsons episode.
celeje celeje 1/12/2018 05:48
@kikouyou:
DeepMind probably won't want us to see them.
They don't want to look bad.
kikouyou kikouyou 1/11/2018 02:30
Where can one find the drawn games (around 70)? Thanks in advance.
QuantumMenace QuantumMenace 12/19/2017 04:49
Nah! AI is far from reality; this is more human-assisted AI, I think. I can beat AlphaZero on a problem. How is it even possible, when computers lack human creativity? All the other engines are brute-force engines bloated with opening theory, middlegame theory, endgame theory, positional theory, strategizing in the style of past world champions, not to mention the plethora of GM games and databases and pawn structures. I cannot believe that Stockfish made such an obvious mistake!

Many thanks
QuantumMenace
jsaldea12 jsaldea12 12/16/2017 11:21
Pushing the puzzle to 8, 9 or 10 movers increases the complications exponentially, and I have no chess computer to aid me. I know computers have limits. And most of all, my puzzle is too beautiful. That is why I would like an answer from Dr. Demis Hassabis as to whether AlphaZero has solved the revised puzzle below, although it has been shortened to an 8-mover. Please let AlphaZero answer without human intervention. jsa12. 12.17.17
jsaldea12 jsaldea12 12/16/2017 06:12
You are right, Bov, I made a booboo: it is not mate in 9, but white does mate black in 8 moves.
Corrected position: White: Ka3, Ba7, Ba8, Nf3, Ng1, Pa2, Pa4, Pa5, Pc2, Pd6, Pe2, Pg5
Black: Kc4, Bc8, Nd5, Pa6, Pc3, Pc5, Pd7, Pe3, Pf5, Pg3, Pg7, Ph6

Solution 1: (1) Pg5xPh6 PxP (2) B-b6 P-h5 (3) B-d8 P-h4 (4) B-g5 P-f4 (5) BxPf4 P-h3 (6) BxPe3 NxB (7) N-e5 K-d4 (8) Ng1-f3 mate.
Solution 2 (the more beautiful solution): (1) P-g6 P-h5 (2) B-b6 P-h4 (3) B-d8 P-h3 (4) B-g5 P-f4 (5) BxPf4 P-h2 (6) BxPe3 NxB (7) N-e5 K-d4 (8) Ng1-f3 mate.
I think this is now chicken pie to AlphaZero. Thank you all and regards.
Jsaldea12, Dec. 16, 2017.
jsaldea12 jsaldea12 12/16/2017 05:39
The position and solution to the chess puzzle. (There is no mate in 9 moves.)

You are right, I made a booboo, but white mates black in 8 moves.
Corrected position: White: Ka3, Ba7, Ba8, Nf3, Ng1, Pa2, Pa4, Pa5, Pc2, Pd6, Pe2, Pg5
Black: Kc4, Bc8, Nd5, Pa6, Pc3, Pc5, Pd7, Pe3, Pf5, Pg3, Pg7, Ph6

Solution 1: (1) Pg5xPh6 PxP (2) B-b6 P-h5 (3) B-d8 P-h4 (4) B-g5 P-f4 (5) BxPf4 P-h3 (6) BxPe3 NxB (7) N-e5 K-d4 (8) Ng1-f3 mate.
jsaldea12 jsaldea12 12/16/2017 01:52
Bov said: "I'm still not sure if you are just a troll or really mythomaniac. Facts are: 1) your puzzle has no mate in 9 solution 2) the solution you gave is wrong and 3) mate in 9 are no big challenges for engines."

Bov, show your proof that there is no solution.

jose s. aldea
12.16.17
jsaldea12 jsaldea12 12/16/2017 01:41
The position and solution to the chess puzzle: White mates in 9 moves.

Position: White: Ka3, Ba7, Ba8, Nf3, Ng1, Pa2, Pa4, Pb5, Pc2, Pd6, Pe2, Pg5
Black: Kc4, Bc8, Nd5, Pc3, Pc5, Pd7, Pe3, Pf6, Pg3, Pg7, Ph6

Solution 1: (1) Pg5xPf6 or Pg5xPh6 PxP (2) B-b6 P-h5 (3) B-d8 P-h4 (4) Bd8xPf6 P-h3 (5) B-g5 B-b7 (6) BxB P-h2 (7) BxPe3 NxB (8) N-e5 K-d4 (9) Ng1-f3 mate.

Again, I would like to thank chess expert Mark Erenburg, arbiter at the World Cup, for prodding and prodding the undersigned until the puzzle was made perfect. About the potential of AlphaZero: it opens a new dimension.

Jose S. Aldea
12.16.17
Bov Bov 12/12/2017 04:18
@jsaldea12: I'm still not sure if you are just a troll or really mythomaniac.
Facts are: 1) your puzzle has no mate in 9 solution, 2) the solution you gave is wrong, and 3) mate-in-9 problems are no big challenge for engines.
TRM1361 TRM1361 12/12/2017 04:13
@Rasmonte 12/9/2017 11:29
Thanks. I didn't know that. An April Fools' joke, and it got me. I remember hearing about it long afterwards (early '80s) and saying "WTF? 1.h4? I can beat that".
jsaldea12 jsaldea12 12/12/2017 01:32
Now it is final: AlphaZero cannot solve the puzzle of white to mate black in 9 moves. Position: White: Ka3, Ba7, Ba8, Nf3, Ng1, Pa2, Pa4, Pb5, Pc2, Pd6, Pe2, Pg5
Black: Kc4, Bc8, Nd5, Pc3, Pc5, Pd7, Pe3, Pf6, Pg3, Pg7, Ph6. If it were mate in 4 or 5 moves, it would be chicken pie to Komodo, Stockfish and AlphaZero, but when it is a 9-mover or an even more complex puzzle, there is a limit to what these supercomputers can perform. Although, using its algorithmic principles, AlphaZero may yet reach that level.
original sin original sin 12/11/2017 10:39
I think it will change all the known, standard lines.
pcst pcst 12/11/2017 01:31
Chess for friendship.
pcst pcst 12/11/2017 01:27
We play chess every day. Don't worry, AZ: if you are clever enough at chess, somebody will come to play you all the time. Don't be shy about playing against everything that can play chess. Just do it: if, from a position you think is won or drawn, you still lose, it means you still have a bug in your program, and you need to solve the problem until you are the best and cleverest engine. Please aim for everyone who really likes to play chess.
Regards. We play for friendship, not to kill other people's ideas.
jsaldea12 jsaldea12 12/11/2017 12:00
I posted the puzzle to the Facebook page of Demis Hassabis early in the morning of Dec. 7, and I posted it on en-chess and chess24 several times the same day, hoping for a response from AlphaZero making mincemeat of the puzzle. Then on Dec. 8, 2017, after 20 hours, more or less, there was no response. But maybe it would be worth trying: let AlphaZero solve the puzzle BY ITSELF, without human intervention. (I still like to think that the human, the maker, prevails over the machine.)
jsaldea12 jsaldea12 12/10/2017 11:53
Just imagine a general commanding his army to fight, and it does. But AlphaZero is more than that: it obeys the command and EXECUTES IT BY ITSELF to perfection. This is what happened in that match with Stockfish. Maybe DeepMind's AlphaZero can be commanded to fight cancer cells (did Google not buy DeepMind for some $400 million?) or to probe the deeper recesses of the universe, etc., and learn their secrets.
fgkdjlkag fgkdjlkag 12/10/2017 11:43
@jsaldea12, how do you know that the makers of AlphaZero saw your puzzle?

Another strange point about the paper: they only published games in which AlphaZero won, even though a third of the total games were reportedly drawn by Stockfish (despite the lack of its opening book). For a scientific paper, they should have randomly selected games, to give an accurate representation of AlphaZero. But their chief aim seems to be to show off their product and Google (also reflected in the match conditions).
guilhermevoncalm guilhermevoncalm 12/10/2017 11:24
An incredible advance in the history of mankind. More than stepping on the moon. The way it is going, all of Earth's problems will be solved. Ideas will be exchanged freely, and money will be superfluous.
jsaldea12 jsaldea12 12/10/2017 11:23
About that chess puzzle of mine, the 9-mover: I would like to thank GM Mark Erenburg, arbiter at the World Chess Cup, for having been patient and kind enough to prod me many times until I perfected the puzzle, in time for the announcement of AlphaZero's demolition of Stockfish. I was thinking AlphaZero would make mincemeat of the puzzle. It did not happen. It is still likely that chess puzzles of 9 moves and up are not yet programmed into supercomputers like Komodo, Stockfish, and even AlphaZero.
APonti APonti 12/10/2017 06:15
I'm thinking of quitting chess. This is too humiliating: a program that in 40 days "learns" Go and beats the world champion... and after just 4 hours of "learning" chess beats one of the best chess programs...

Please make a program like that to search for a cure for cancer, for instance!
jwcb jwcb 12/10/2017 05:35
I think we need to be careful about drawing conclusions such as "The English is the best opening." The 12 openings shown in Table 2 of the paper (https://arxiv.org/pdf/1712.01815.pdf) are the 12 most often played by humans, not those most often played by AlphaZero during its self-play training. Furthermore, look more closely at the graph for the English Opening in the table. After 8 hours of self-play, AlphaZero was playing the English about 7 percent of the time. That means it was playing something else 93 percent of the time. We don't know what that was. The same story goes for the Queen's Gambit.
celeje celeje 12/10/2017 12:48
@fgkdjlkag, @tjallen: I'd guess they had to do this because of finite computer resources (to do with how games are represented). This makes me also think they probably did this only for the training phase. Of course, it'd be ridiculous if they did this for the match. This is another thing they need to clarify.
Aramantik Aramantik 12/9/2017 07:07
Not to mention that SF played 8 games out of 10 with Black :) Not fair testing. Give it a powerful book, make it play strong openings, and always give a rematch with the opposite color.
Aramantik Aramantik 12/9/2017 06:59
In my opinion, SF was not given a fair chance. It was forced to play bad openings with Black and never even given a chance to play the same opening with the opposite color.
Maybe AlphaZero can play without a book, but SF and other chess engines play significantly weaker without strong books.
The least that should have been done is to allow SF to take revenge with the opposite color in the same opening. Not to mention that SF 8 is old and newer versions are available.
e-mars e-mars 12/9/2017 06:56
@fgkdjlkag The point is not whether Google is moved primarily by commercial reasons or humanitarian ones; the point is that medicine, physics and chess are simply not comparable.

Chess is a PSPACE-complete game: given a position and the set of rules, there is a finite sequence of moves that leads either to a win, a loss or a draw.

AlphaZero was taught the rules of chess, full stop. It knows everything about that domain.
If you "try" AlphaZero on medicine or physics, you give it a limited subset of data about those domains, and it will come up with solutions within the boundaries of those subsets. We do NOT know everything about medicine and physics. We cannot teach AlphaZero all the rules of medicine or physics, simply because we do not know all of them.

It is like teaching AlphaZero chess without telling it the en passant rule: it won't come up one day with "hey! *ass! you didn't tell me about en passant!". It will simply play chess without en passant.

The key difference is: AlphaZero and AI cannot INVENT or DISCOVER new things. They can only find different solutions to problems we already have under our nose.
fgkdjlkag fgkdjlkag 12/9/2017 04:30
@tjallen, that is a good point. With the 3-fold repetition and 50-move rule, I see no reason for chess games to be stopped early and declared draws.

Regarding the upper Elo limit, it is primarily a function of the game, as someone posted, but also of the machine/player.

@jsaldea12, talking about a "cure for cancer" is a bit misleading. Take another entity, coronary artery disease. It is known to be related to diet, exercise, and stress (among many other factors). Is there a single "cure" for it? Cancer is a mutation of cells, which has a number of known risk factors as well, including diet, exercise, and stress. Is there any more reason to think that there is a single cure for it (unlike the treatment of a bacterium or virus), and that it is going to be developed by Google? Keep in mind these are profit-making corporations, and based on Google's past actions, its primary motivation is profit, not saving humankind.
celeje celeje 12/9/2017 11:34
@JactaEst: A recent computer chess tournament had the book off, and it seems the openings were still varied. Here's one thing you can try: run a (e.g. blitz) tournament of Stockfish 8 against itself, with Book off, Ponder off, and exactly the same settings for all the games. If it's completely deterministic, won't all the games be identical? I bet that won't happen, and maybe the openings will be varied too. If you can do this, please let us know the results.
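For anyone who wants to try celeje's experiment without setting up a GUI, here is a rough sketch using the python-chess library; the engine path, the fixed search depth and the 200-move cap are assumptions to adjust (fixed depth rather than a blitz clock removes timing noise, itself a major source of nondeterminism):

```python
import chess
import chess.engine

ENGINE_PATH = "stockfish"   # adjust to wherever your Stockfish binary lives

def selfplay_game(depth=10, max_moves=200):
    """One Stockfish-vs-Stockfish game: no book, no pondering, fixed depth."""
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    engine.configure({"Threads": 1, "Hash": 16})  # one thread: fewer nondeterminism sources
    board = chess.Board()
    moves = []
    while not board.is_game_over() and len(moves) < max_moves:
        result = engine.play(board, chess.engine.Limit(depth=depth))
        moves.append(result.move.uci())
        board.push(result.move)
    engine.quit()
    return moves

# If the search were fully deterministic, both move lists would be identical.
print("identical games:", selfplay_game() == selfplay_game())
```

If the two games differ, you have reproduced JactaEst's observation below that identical settings need not produce identical play.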
Rasmonte Rasmonte 12/9/2017 11:29
@TRM1361
The story with 1.h4 was an April Fool's hoax by Martin Gardner in his Mathematical Games column in Scientific American in April 1975.
JactaEst JactaEst 12/9/2017 10:52
I'm not clear on this 'no opening book for Stockfish' assertion.
When I ran my Stockfish 8 last year, it decided the best defence to d4 was the Ragozin and played it 100% of the time with the opening book off.
If an engine with no randomisation factor/opening book thinks for 60 seconds, won't it always reply with exactly the same response? The one it deems to be the best.
And yet in the ten games given, Stockfish responded to d4 from AZ with both Nf6 and e6. Also noticeable is that AZ varied its replies to 1...Nf6 and 1...e6.
It's not surprising that a lot of the d4 games ended up in the Queen's Indian. Given free rein, engines playing White seem to prefer offering Black that option rather than the Nimzo.
We really need to see the PGN for all 100 games to see what was going on...
TRM1361 TRM1361 12/9/2017 07:33
So what is AlphaZero's favourite opening with White and with Black? The English and the Queen's Gambit were mentioned for White, but the paper seems to have a very low opinion of Black's choices.

It doesn't like the Queen's Pawn, King's Indian, French, or Sicilian. It liked the Caro-Kann up to hour 6, then abandoned it.

Did it invent a new one? I remember way back some computer had done this self-play type of stuff and claimed 1.h4 as White to be the best :)
Robert Fowler Robert Fowler 12/9/2017 06:19
Alpha Zero's "Immortal Zugzwang Game" against Stockfish analyzed: https://youtu.be/lFXJWPhDsSY
pcst pcst 12/9/2017 05:49
Waited so long, AZ. Don't be shy to play in test/bug mode. I showed you some of my games already.
celeje celeje 12/9/2017 04:37
@wasmaster:
Also, please look through all the previous comments here. You'll see comments relevant to your claims (e.g. you'll see that you are plain wrong to claim that AZ performed 200 Elo stronger).
Masquer Masquer 12/9/2017 04:33
@wasmaster
Isn't it a false statement to claim that the Elo difference was 200, when in fact it was just 100 Elo for the 100-game match? Just use an Elo calculator; it will tell you the correct difference.
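For reference, the standard logistic Elo model converts an expected score s into a rating gap of -400 * log10(1/s - 1), so Masquer's number is easy to check against AlphaZero's 64 points out of 100:

```python
import math

def elo_gap(score):
    """Rating difference implied by an expected score, per the Elo model."""
    return -400 * math.log10(1 / score - 1)

print(round(elo_gap(0.64)))   # about 100 Elo for a 64:36 match result
```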
celeje celeje 12/9/2017 04:30
@wasmaster:
Google (i.e. the same company behind AZ) boasted this year that their TPU delivers up to 30 times the performance of a contemporary CPU. So going by their own words, 4 TPUs are equivalent to up to 120 CPUs.

They did not run Stockfish on 64 CPUs. They ran it on 64 threads, so it was perhaps 32 cores, which may in turn have been perhaps 8 CPUs (each quad-core).

It looks like AZ had a huge hardware advantage.
kaimiddleton kaimiddleton 12/9/2017 04:16
I would like to know, in precise terms, what the power usage of the AlphaZero 4-TPU system is compared to the power usage of the 64-core system that Stockfish was running on. If those numbers are comparable, then in general I don't have a problem with AlphaZero having "superior hardware". (Quibbles about the 1 GB of Stockfish hash aside.)

On the talkchess forum someone proposed the following, but I don't know if it's correct: "1 Google TPU is around 50W, basically you have them 4 and another Haswell to run actual MCTS on those so around 300W. SF's hardware was most probably two 32 core CPUs each at 150W, so around 300W also."