The future is here – AlphaZero learns chess

by Albert Silver
12/6/2017 – Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured the game out well enough to beat the strongest programs in the world convincingly! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess, and came up with this spectacular result.


DeepMind and AlphaZero

About three years ago, DeepMind, a company owned by Google that specializes in AI development, turned its attention to the ancient game of Go. Go had been the one game that had eluded all computer efforts to reach world class, and even up until the announcement, mastering it was deemed a goal that would not be attained for another decade! That is how large the gap was. When a public challenge and match was organized against the legendary player Lee Sedol, a South Korean whose track record placed him among the greatest ever, everyone expected an interesting spectacle, but a certain win by the human. The question wasn’t even whether the program AlphaGo would win or lose, but how much closer it had come to the Holy Grail. The result was a crushing 4-1 victory, and a revolution in the Go world. In spite of a ton of second-guessing by the elite, who could not accept the loss, they eventually came to terms with the reality of AlphaGo, a machine that was among the very best, albeit not unbeatable. It had lost a game, after all.

The saga did not end there. A year later a new, updated version of AlphaGo was pitted against the world number one in Go, Ke Jie, a young Chinese player whose genius is not without parallels to that of Magnus Carlsen in chess. At the age of just 16 he won his first world title, and by 17 he was the clear world number one. That had been in 2015, and now, at 19, he was even stronger. The new match was held in China itself, and even Ke Jie knew he was most likely a serious underdog. There were no illusions anymore. He played superbly but still lost by a perfect 3-0, a testimony to the amazing capabilities of the new AI.

Many chess players and pundits had wondered how it would do in the noble game of chess. There were serious doubts about just how successful it might be. Go is a huge and long game played on a 19x19 grid, in which all the pieces are identical and none of them ever moves. Calculating ahead as in chess is an exercise in futility, so pattern recognition is king. Chess is very different. There is no questioning the value of knowledge and pattern recognition in chess, but the royal game is supremely tactical, and a great deal of knowledge can be compensated for by simply outcalculating the opponent. This has been true not only of computer chess, but of humans as well.

However, there were some very startling results in the last few months that need to be understood. DeepMind’s interest in Go did not end with that match against the world number one. You might ask what more there was to do after that. Beat him 20-0 instead of just 3-0? No, of course not. Instead, the super Go program became an internal litmus test of sorts. Its strength was unquestioned and quantified, so if one wanted to know how good a new self-learning AI really was, throwing it at Go and seeing how it compared to AlphaGo was a way to measure it.

A new AI was created, called AlphaZero, and it differed from its predecessor in several striking ways. The first was that it was not shown tens of thousands of master games of Go to learn from; it was shown none. Not a single one. It was given only the rules, without any other information. The result was a shock. Within just three days the completely self-taught program was stronger than the version that had beaten Lee Sedol, a result the previous AI had needed over a year to achieve. Within three weeks it was beating the strongest AlphaGo, the one that had defeated Ke Jie. What is more: while the Lee Sedol version had used 48 highly specialized processors to create the program, this new version used only four!
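
To give a rough idea of what this kind of learning looks like in practice, here is a heavily simplified sketch of a self-play training loop. To be clear, this is my own illustration, not DeepMind's code, and the helper functions are hypothetical stand-ins: the point is only that every training example comes from the program playing against itself, starting from nothing but the rules.

import random

def play_one_game(network):
    # Hypothetical stand-in for one game of self-play guided by the network and
    # a tree search: returns the positions visited, the move probabilities the
    # search settled on for each of them, and the final result (+1, 0 or -1).
    positions = ["startpos"]
    search_policies = [{"e2e4": 0.6, "d2d4": 0.4}]
    return positions, search_policies, random.choice([1, 0, -1])

def train(network, examples):
    # Hypothetical stand-in: fit the network to predict the stored search
    # policies and game outcomes directly from the raw positions.
    return network

def self_play_training(network, iterations=3, games_per_iteration=5):
    # No master games and no opening book: the training data is generated
    # entirely by self-play, then fed back into the network.
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            positions, policies, outcome = play_one_game(network)
            examples += [(pos, pol, outcome) for pos, pol in zip(positions, policies)]
        network = train(network, examples)
    return network

self_play_training(network=None)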

Graph showing the relative evolution of AlphaZero | Source: DeepMind

AlphaZero learns Chess

Approaching chess might still seem unusual. After all, although DeepMind had already shown near-revolutionary breakthroughs with Go, that was a game that had yet to be ‘solved’. Chess already had its Deep Blue moment 20 years ago, and today even a good smartphone can beat the world number one. What is there to prove exactly?

Garry Kasparov is seen chatting with Demis Hassabis, founder of DeepMind | Photo: Lennart Ootes

It needs to be remembered that Demis Hassabis, the founder of DeepMind, has a profound chess connection of his own. He had been a chess prodigy in his own right, and at age 13 was the second highest rated player under 14 in the world, second only to Judit Polgar. He eventually left the chess track to pursue other things, like founding his own PC video game company at age 17, but the link is there. There was still a burning question on everyone’s mind: just how well would AlphaZero do if it were focused on chess? Would it just be very smart, but smashed by the number-crunching engines of today, where a single ply is often the difference between winning and losing? Or would something special come of it?

Professor David Silver explains how AlphaZero was able to progress much more quickly when it had to learn everything on its own, as opposed to analyzing large amounts of data. The efficiency of a principled algorithm was the most important factor.

A new paradigm 

On December 5 the DeepMind group published a new paper on the arXiv preprint server (hosted by Cornell University) called "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", and the results were nothing short of staggering. AlphaZero had done more than just master the game; it had attained new heights in ways considered inconceivable. The proof is in the pudding, of course, so before going into some of the fascinating nitty-gritty details, let’s cut to the chase. It played a match against the latest and greatest version of Stockfish and won by an incredible score of 64:36, and not only that: AlphaZero had zero losses (28 wins and 72 draws)!
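
As an aside (my own back-of-the-envelope conversion, not a figure from the paper): under the standard logistic Elo model, a 64% score corresponds to a rating advantage of roughly 100 points.

import math

def elo_advantage(score_fraction):
    # Elo difference implied by an overall score under the standard logistic
    # expectation model; FIDE's lookup table gives a very similar figure here.
    return -400 * math.log10(1 / score_fraction - 1)

print(round(elo_advantage(0.64)))  # roughly 100 Elo points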

Stockfish needs no introduction to ChessBase readers, but it's worth noting that it was calculating nearly 900 times more positions per second! Indeed, AlphaZero was evaluating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was running at 70 million positions per second. To better understand how big a deficit that is, a version of Stockfish running 900 times slower would search roughly 8 moves less deeply. How is this possible?
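
Before getting to the paper's answer, a quick back-of-the-envelope check of that 'roughly 8 moves' figure (my own arithmetic, not from the paper): if a well-pruned alpha-beta search has an effective branching factor of somewhere around 2 to 3, a speed ratio translates into a depth difference of log(ratio) / log(branching factor).

import math

def depth_deficit(nps_fast, nps_slow, effective_branching_factor=2.3):
    # Rough estimate of how many plies a speed disadvantage costs, assuming
    # search effort grows by the effective branching factor with each ply.
    ratio = nps_fast / nps_slow
    return math.log(ratio) / math.log(effective_branching_factor)

# Figures quoted above: Stockfish ~70 million NPS, AlphaZero ~80 thousand NPS
print(round(depth_deficit(70_000_000, 80_000), 1))  # about 8 plies at a branching factor of 2.3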

The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" at Cornell University

The paper explains:

“AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search, as originally proposed by Shannon. Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero’s MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief that alpha-beta search is inherently superior in these domains.”

This diagram shows that the longer AlphaZero had to think, the more it improved compared to Stockfish

In other words, instead of the hybrid brute-force approach that has been at the core of chess engines to date, it went in a completely different direction, opting for an extremely selective search that emulates how humans think. A top player may be able to outcalculate a weaker player in both consistency and depth, but that still remains a joke compared to what even the weakest computer programs are doing. It is the human’s sheer knowledge and ability to filter out so many moves that allows them to reach the standard they do. Remember that although Garry Kasparov lost to Deep Blue, it is not at all clear that the machine was genuinely stronger than him even then, and this despite its reaching speeds of 200 million positions per second. If AlphaZero is really able to use its understanding not only to compensate for searching 900 times fewer positions, but to surpass the engines that do, then we are looking at a major paradigm shift.
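
To give a concrete flavour of what 'extremely selective' means, here is a minimal sketch of the PUCT-style selection rule at the heart of AlphaZero's Monte Carlo Tree Search: each candidate move is scored by its average result so far plus a bonus that favours moves the neural network's prior likes but which have been explored little. This is my own illustration, and the numbers (and the c_puct constant) are invented purely for the example.

import math

def select_move(stats, c_puct=1.5):
    # Pick the move maximizing Q + U, where Q is the average value of the move
    # so far and U is an exploration bonus weighted by the network's prior.
    total_visits = sum(s["visits"] for s in stats.values())
    def score(s):
        q = s["value_sum"] / s["visits"] if s["visits"] else 0.0
        u = c_puct * s["prior"] * math.sqrt(total_visits) / (1 + s["visits"])
        return q + u
    return max(stats, key=lambda move: score(stats[move]))

# Illustrative numbers only: a move with a strong prior but few visits can
# outrank a well-explored move with a slightly better average value.
stats = {
    "Bg5": {"prior": 0.40, "visits": 10,  "value_sum": 6.0},
    "Be3": {"prior": 0.15, "visits": 200, "value_sum": 130.0},
}
print(select_move(stats))  # prints Bg5 with these numbers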

How does it play?

Since AlphaZero did not benefit from any chess knowledge, meaning no games and no opening theory, it had to discover opening theory on its own. And do recall that this is the result of only 24 hours of self-learning. The team produced fascinating graphs showing the openings it discovered, as well as the ones it gradually rejected as it grew stronger!

Professor David Silver, lead scientist behind AlphaZero, explains how AlphaZero learned openings in Go, and gradually began to discard some in favor of others as it improved. The same is seen in chess.

In the diagram above, we can see that in the early games AlphaZero was quite enthusiastic about playing the French Defense, but after two hours (this is so humiliating) began to play it less and less.

The Caro-Kann fared a good deal better, and held a prime spot in AlphaZero's opening choices until it also gradually filtered it out. So what openings did AlphaZero actually like or choose by the end of its learning process? The English Opening and the Queen's Gambit!

The paper also came accompanied by ten games to share the results. It needs to be said that these are very different from the usual fare of engine games. If Karpov had been a chess engine, he might have been called AlphaZero. There is a relentless positional boa-constrictor approach that is simply unheard of. Modern chess engines are focused on activity, and have special safeguards to avoid blocked positions, as they have no understanding of them and often find themselves in a dead end before they realize it. AlphaZero has no such prejudices or issues, and seems to thrive on snuffing out the opponent’s play. It is singularly impressive, and what is astonishing is how it is also able to find tactics that the engines seem blind to.

 

This position, from Game 5 of the ten published, arose after 20...Kh8. The completely disjointed array of Black’s pieces is striking, and AlphaZero came up with the fantastic 21.Bg5!! After analyzing it and its consequences, there is no question this is the killer move here, and while my laptop cannot produce 70 million positions per second, I gave the position to Houdini 6.02, running at 9 million positions per second. It analyzed for one full hour and was unable to find 21.Bg5!!

A screenshot of Houdini 6.02 after an hour of analysis

Here is another little gem of a shot, in which AlphaZero had completely stymied Stockfish positionally, and now wraps it up with some nice tactics. Look at this incredible sequence in game nine:

 

Here AlphaZero played the breathtaking 30. Bxg6!! The point, obviously, is that 30...fxg6 runs into 31. Qxe6+, but how do you continue after the game's 30...Bxg5 31. Qxg5 fxg6?

 

Here AlphaZero continued with 32. f5!! and after 32...Rg8 33. Qh6 Qf7 34. f6 obtained a deadly bind, which it worked into a win 20 moves later. Time to get out a thesaurus for all the synonyms of 'amazing'.

What lies ahead

So where does this leave chess, and what does it mean in general? This is a game-changer, a term that is so often used and abused, but there is no other way of describing it. Deep Blue was a breakthrough moment, but its result was thanks to highly specialized hardware whose sole purpose was to play chess, nothing else. If one had tried to make it play Go, for example, it would never have worked. A completely open-ended AI, able to learn from the least amount of information and take this to levels hitherto never imagined, is not a threat to ‘beat’ us at any number of activities; it is a promise to analyze problems such as disease and famine in ways that might conceivably lead to genuine solutions.

For chess, this will likely lead to genuinely groundbreaking engines following in these footsteps. That is what happened in Go. For years and years, Go programs had been more or less stuck where they were, unable to make any meaningful advances, and then AlphaGo came along. It wasn't because AlphaGo offered some inspiration to 'try harder'; it was because, just as here, a paper was published detailing all the techniques and algorithms developed and used, so that others might follow in those footsteps. And they did. Literally within a couple of months, new versions of top programs such as Crazy Stone began offering updated engines with deep learning, which brought hundreds (plural) of Elo points in improvement. This is no exaggeration.

Within a couple of months, the revolutionary techniques used to create AlphaGo began to appear in top PC programs of Go

The paper on chess offers similar information, allowing anyone to do what they did. Obviously they won't have the benefit of the specialized TPUs, processors designed especially for this kind of deep learning training, but nor are those strictly required. It bears remembering that this was also done without the benefit of many of the specialized techniques and tricks of chess programming. Who is to say they cannot be combined for even greater results? Even the DeepMind team thinks it bears investigating:

"It is likely that some of these techniques could further improve the performance of AlphaZero; however, we have focused on a pure self-play reinforcement learning approach and leave these extensions for future research."

Replay the ten games between AlphaZero and Stockfish 8 (70 million NPS)

Born in the US, he grew up in Paris, France, where he completed his Baccalaureat, and after college moved to Rio de Janeiro, Brazil. He had a peak rating of 2240 FIDE, and was a key designer of Chess Assistant 6. In 2010 he joined the ChessBase family as an editor and writer at ChessBase News. He is also a passionate photographer with work appearing in numerous publications.
Discussion and Feedback

Igor Freiberger Igor Freiberger 12/7/2017 04:02
KingZor: no, the PLOTS in Table 2 are based on database results, the 0/0/0 data for each opening are games played by AZ and Stockfish.

AZ seems to be extremely strong, but the paper has flaws that make me question whether its achievements were obtained under the described conditions.

Regarding other problems it could solve, please remember all this AI is based on situations with known endpoints within a delimited universe of variables (a closed problem). To, for example, find a cure for a disease is far more complex as it is an open problem —although AI could help on this, of course.
psamant psamant 12/7/2017 02:46
@dofski
"...why does AlphaZero approach asymptotically just about the same level as Stockfish about 3400 when it is learning by playing itself. Why not 3000 or 5000 or 10000. Does the 3400 represent some upper limit which is difficult to surpass"
Very interesting observation! Perhaps someone here can come up with a hypothesis for this? Is there some upper limit to ELO rating possibility, specially in the changed circumstances where having more opponents with high rating is no longer a constraint, considering that we have abundant computer programs with 3000 or more of rating levels.
lajosarpad lajosarpad 12/7/2017 02:39
Now it's time to write and execute a Conditional Functional Dependency search on the database of the engine. It might find rules like:

"double isolated pawns are powerful outposts for knight pairs"

whereas we, humans could only reach the conclusion that:

"double isolated pawns are bad... with a few exceptions"

A huge learning phase on cause-effect relationships would yield a lot of staggering aggregate data.
ltoime@gmail.com ltoime@gmail.com 12/7/2017 02:15
It is intriguing how different the Elo learning curves look for the different games. In Shogi, it fascinatingly didn’t always increase - there were occasional small sustained dips.
What happens when AZ plays against itself for a week, or a month, or a year?

Stay tuned.
A7fecd1676b88 A7fecd1676b88 12/7/2017 01:51
So a super computer can learn and figure out good moves, in a very simple game, if the rules don't change. Shocking.
That is not the real world, now is it?
For example, weather models will sometimes have chaos, and so the rules are variable. Humans can handle that. Computers ? Good luck
Yawn. A nothing burger.
dofski dofski 12/7/2017 01:51
Figures 1 and 2 in the paper show that the ELO difference between AlphaZero and Stockfish is "quite low" it seems roughly 100 or less. This seems to account for the score in the match between the two.

But in Figure 1 why does AlphaZero approach asymptotically just about the same level as Stockfish about 3400 when it is learning by playing itself. Why not 3000 or 5000 or 10000. It seems a funny coincidence.

Does the 3400 represent some upper limit which is difficult to surpass.
franis franis 12/7/2017 01:08
What is a proof that Bg5 is better than Be3 or b4?
KingZor KingZor 12/7/2017 01:04
Igor Freiberger, table 2 refers to human games. Interesting that there were only two draws in Shogi out of 100 games, which is about the same frequency as among the Japanese masters. A game about as complex as chess, if not more, but with much more decisive games.
Zvi Mendlowitz Zvi Mendlowitz 12/7/2017 12:55
@jsaldea12
The position is illegal. Also, there is no solution.

I am waiting for AZ to learn a programming language... Then it could make its own AI... which would make its own AI... the first one in 24 hours, the second one in 12 hours, the third one in 6 hours... after 48 hours - singularity :-)
jsaldea12 jsaldea12 12/7/2017 12:16
• Dec. 7. 2017 (7:15PM)

Dr. Demis Hassalbis
Inventor Deep Mind Alpha Go

Sir:

Please see if Alpha Go can solve this 30 years in the making chess puzzle of mine. Would appreciate response from Deep Mind. White mates black in 9 moves.
Position: White: Pa2, Ka3, Pa4, Ba7, Ba8, Pb5, Pc2, Pd6,Pe2, Nf3, Ng1, Pg5
Black Pc3. Kc4, Pc5, B-c8, N-d5,Pd7, Pe3, P-g3, P-f6, Pg7, Ph6

Regards.

Jose S. Aldea
Scientist-inventor
savantKing99 savantKing99 12/7/2017 11:40
It sounds too good to be true. And yes, of course it is a big step forward. But after all it is just a simulation. And again, AlphaZero will not pass the Turing test.
It is just rubbish!! If you read the paper it says that it learned chess by itself in just 24 hours. But in the meantime it played something like 1 million games with itself. So it created an openings book!! I don't know how to describe it. But Google just hid a phase in the program so that it looks like it is self-teaching. Apart from all this, of course it is genius. But it has nothing to do with AI. Sorry
celeje celeje 12/7/2017 11:02
@bertman: The comment by @rokko below about comparing positions/sec is correct. AZ was not running on a slower computer. It is just calculating in a different way. If you must compare the hardware, note that AZ was running on a machine with 4 TPUs. Google itself claims its TPU is up to 30x higher performance than a contemporary CPU, so AZ is running on the equivalent of up to 120 CPUs. Please change your second paragraph in the section 'A New Paradigm', which is highly misleading at best.
Bojan KG Bojan KG 12/7/2017 10:57
I am fascinated by us humans, we are able to create wonders but at same time we can not cope with ancient virus like rabbies. If this algorithm is able to solve anything by learning on its own then this is one of the most important scientific breakthroughs in history of mankind. The claim that AplhaZero "has beaten" chess is most ridiculous statement I have heard of for many many years.
rokko rokko 12/7/2017 10:49
Finding Bg5 is indeed impressive and shows, like many well-known examples, that usual chess programmes prune too aggressively.

But SF's computer power was NOT great (70000 kN/s is a top desktop computer and 1GB of hash table is a 2010 laptop) and comparing numbers between programmes shows the difference in their approach but not the computing power.

AlphaZero seems to have played 700,000 games, i.e. the output of many years of professional chess, to "train". This is as if SF did not only have an opening book but the MegaDatabase (as a lot of it is old or amateur) at its disposal.
Bammer Bammer 12/7/2017 10:45
I do not get your conclusion regarding openings: Does this mean the French and the CaroKann are not more than equalizing defenses? Disappointing. A quick look at the ECO classifications in the paper itself also makes me wonder why there is a line of the Bf4 Queens Gambit given as Reti Opening, a Grünfeld as a KingsIndian and a D15 Line ChebanenkoSlav is a D06 QG. Seems like there wasn't a chess player around to do a minimum of proof reading.
pcst pcst 12/7/2017 10:04
I think even Best Program still have Bug. test half second per game it will see it can solve problem of test bug mode.
Busho Busho 12/7/2017 09:57
To everyone complaining that Stockfish was handicapped, consider this: AlphaZero found 21.Bg5!!, and Stockfish couldn't find it even though it had significantly more compute power. Also, consider this: AlphaZero started with ZERO knowledge and was able to self-learn opening theory in 24 hours. It took centuries for humans to gain this kind of insight into chess openings.

This technology can be a game changer. No doubt in my mind.
Bojan KG Bojan KG 12/7/2017 09:51
Complete rubbish - what exactly does cracking or crushing chess mean? Total number of chess games is 10 to the power of 500, number of chess games up to 40 moves is 10 to the power of 120. Of course many of these do not make sense but they are there anyway. Can you just imagine computer power needed to solve chess in its entirety? Even 8-piece tablebase is currently out of reach let alone complete chess. When Komodo, SF and Houdini were developed many, as today, thought these are unbeatable and play perfect chess and it turned out to be big mistake. Every credit to the team who developed this algorithm but cracking/crushing chess? Give me a break.
celeje celeje 12/7/2017 09:44
@fgkdjlkag: I relied on the article in which Nakamura & Larry Kaufman both commented on there being no opening book. (The article does not mention anything about endgame tablebases, though.) I'm guessing the chess journalist interviewing them may have told them the match conditions, so if he's wrong, they'd both have gotten the wrong info.
Igor Freiberger Igor Freiberger 12/7/2017 09:14
Frederic and Albert: are you sure this paper is serious? A number of details are not coherent. Table 1 of the paper says there was a 100-game match between AZ and Stockfish with no defeats by AZ. But Table 2 shows results for the 12 most common openings with 50 games as White and 50 games as Black for each one. Three conclusions here: (1) There was a total of 1200 games between AZ and Stockfish, with 24 losses by AZ; (2) Opening choice was not free, they set both to play a defined number of games for each line; (3) AZ had already self-played tons of learning games when it faced Stockfish. So it had already "learnt" openings and had its own "book" (in the form of theta parameters) while Stockfish was completely vulnerable on this point, a huge handicap. There were other suspect points, but I will not invest so much time analysing all this. AZ may be a relevant advance in AI, but the paper does not prove that.
pcst pcst 12/7/2017 09:13
I wolud like to test AlphaZero learns chess where can I test Bug of program. Or maybe AlphaZero learns chess can make to Thai chess program many player likes to test. Thanks for read my comment.
Busho Busho 12/7/2017 09:10
Looks like there are lots of Stockfish lovers here. It will take them a while to realize what just happened: decades of human hours spent on optimization and refinement of search algos & positional heuristics was surpassed by a single self-learning algo left alone to figure it all out in 24 hours.

For me, after the initial feeling of utter amazement, a very deep seated fear has settled in. Like Frederic said, it is the beginning of Skynet. So, now I guess we have to create a time machine, and then go back in time to make sure that Demis continues his chess career and doesn't switch to computer science. Just kidding.

I think this is great news, and let's hope this technology lives up to its promise and solves real world problems and not just chess problems.
Bank2010 Bank2010 12/7/2017 09:10
I had an argument with someone about when chess will be solved. I predicted in 10 years, he said more than 50 years. Now we are wrong. Chess will be solved whenever DeepMind wants.
pcst pcst 12/7/2017 09:01
I have collected games that I played and won against Stockfish 8 64, though in speed mode. There are also plenty of games I lost, but I am posting this to show that even the best programs still have bugs, something I have been testing for more than 20 years. Example:

[Event ""]
[Site ""]
[Date "29/10/2017 19:44:10"]
[Round "1"]
[White "Human"]
[Black "Stockfish 8 64"]
[Opening "King's pawn game"]
[Eco "C20"]
[TimeControl "0.5+0 (Min.+Inc.)"]
[Result "1-0"]

{[%clk 0:00:30] [%clk 0:00:30] } 1. e4 {[%clk 0:00:30] } e5 {[%clk 0:00:28]
} 2. d3 {[%clk 0:00:29] } Nc6 {[%clk 0:00:27] } 3. Bd2 {[%clk 0:00:28]
} Nf6 {[%clk 0:00:26] } 4. Nc3 {[%clk 0:00:27] } Bc5 {[%clk 0:00:24] }
5. h3 {[%clk 0:00:25] } d5 {[%clk 0:00:23] } 6. Nf3 {[%clk 0:00:24] } d4
{[%clk 0:00:22] } 7. Ne2 {[%clk 0:00:24] } a5 {[%clk 0:00:20] } 8. Qc1
{[%clk 0:00:20] } O-O {[%clk 0:00:17] } 9. Ng3 {[%clk 0:00:19] } Be6 {[%clk
0:00:15] } 10. Be2 {[%clk 0:00:18] } Bd7 {[%clk 0:00:14] } 11. Bh6 {[%clk
0:00:16] } gxh6 {[%clk 0:00:14] } 12. Qxh6 {[%clk 0:00:15] } Bb4+ {[%clk
0:00:13] } 13. c3 {[%clk 0:00:14] } dxc3 {[%clk 0:00:12] } 14. O-O {[%clk
0:00:13] } cxb2 {[%clk 0:00:11] } 15. Ng5 {[%clk 0:00:12] } bxa1=Q {[%clk
0:00:10] } 16. Nh5 {[%clk 0:00:11] } Qxf1+ {[%clk 0:00:09] } 17. Bxf1 {[%clk
0:00:10] } a4 {[%clk 0:00:07] } 18. Qg7# {[%clk 0:00:09] } 1-0
MisterX MisterX 12/7/2017 08:45
I let my Stockfish 8 on i7 laptop hardware calculate the position in game 5 after 20...Kh8 over night. It took about 5 hours to find 21.Bg5!!
ConwyCastle ConwyCastle 12/7/2017 08:14
Are you sure Stockfish was on the faster machine and not the other way around?
Igor Freiberger Igor Freiberger 12/7/2017 08:08
So chess is dead. Let AZ learn by itself during months and the game will be solved. Maybe a great moment for AI, not for chess.

Two points:

1. Opening choice description is somewhat tricky: AZ began by choosing the French Defense and then the Caro-Kann, choices made by Black, and ended up playing the English, a choice made by White.

2. The article talks about Demis Hassabis, but the paper is also signed by Darshan Kumaran, British Champion in the 90s and also U12 World Champion.
fgkdjlkag fgkdjlkag 12/7/2017 07:52
Is there confirmation that stockfish opening book was not used? I was relying on @celeje's comments below, and I see that Nakamura mentioned the same point. But how do we know if it was not mentioned in the paper? Presumably it would at least be obvious from looking at the games.
@Werewolf, what is a typical hash? Is that the memory the program uses? And is it mitigated because they ran stockfish on a very specialized system?

If Stockfish was handicapped I suppose the makers of it will be making a statement.
vixen vixen 12/7/2017 07:44
Its a good read
They best is they throw an open challenge now to human or machine
Then its something
Werewolf Werewolf 12/7/2017 07:28
Impressive, but:
Stockfish was handicapped.

Its hash was limited to 1GB. Why??
No opening book?
1 min / move is artificial, usually at 40/40 the engine would take longer (5 minutes/ move) in the opening.
vishyvishy vishyvishy 12/7/2017 07:21
Ohh Boy!! Will they share all the 100 games played?? ... If they arrange and play 1000 games against top engines I am willing to "purchase"and see and Enjoy each and every move of those games
jsaldea12 jsaldea12 12/7/2017 07:20
• Dec. 7. 2017

Dr. Demis Hassalbis
Inventor Deep Mind Alpha Go

Sir:

I think it will be chicken pie. Please see if Alpha Go can solve this 30 years in the making chess puzzle of mine (how many minutes to solve). White mates black in 9 moves.
Position: White: Pa2, Ka3, Pa4, Ba7, Ba8, Pb5, Pc2, Pd6,Pe2, Nf3, Ng1, Pg5
Black Pc3. Kc4, Pc5, B-c8, N-d5,Pd7, Pe3, P-g3, P-f6, Pg7, Ph6

Regards.

Jose S. Aldea
Scientist-inventor
Bobbyfozz Bobbyfozz 12/7/2017 07:06
This was fun to read. If someone publishes a book on this, of some value, this would make the newspapers and TV outlets all over the world. Course what they publish will be nonsensical because hey, they butcher normal chess, but that won't deter them from sensationalism. Thanks for the article.
rubinsteinak rubinsteinak 12/7/2017 05:58
On a non-chess note, I do wonder how DeepMind proposes to translate the self-reinforcing learning algorithm to open-ended problems. Let's take weather, for example. To follow the isolation method DM describes, whereby no outside input is given except the "rules of the game," how does one apply this to the "game" of weather, for which we don't know all the rules? It seems in these cases you do need to feed AlphaZero data, like wind speeds and directions, humidities, barometric pressures, etc., and then let it "generate" weather scenarios. Again, though, there is the open-ended problem that there is no way of knowing the "correct" answer, unless you fed it live data and then let it run its projections and compare them to what happens in real life. I still say there is the problem of incomplete data input, because we don't really know all of the variable interactions and real-time data. So, to tie it back to chess, it would be like telling AlphaZero how some of the pieces move, but not the others; it knows there is this thing called a "knight," but it doesn't know how it moves. That's incomplete data.
okfine90 okfine90 12/7/2017 05:46
So it only learned the rules of the chess game(how pieces move etc) , and it could teach itself chess and become a monster in few hours!!!. Everybody feels intuitively that this will have a much larger implication, and Chess and Go are few cases to start with. So by providing Newton laws to it(just as we told it how chess pieces move), it could learn and create Einstein's General Relativity in few hours!!? (or perhaps a much powerful theory than that). That looks extremely challenging, but the direction is set now. Who knows AI can design a solution for time machine in few hours(after few years)!!.
celeje celeje 12/7/2017 05:42
@rubinsteinak: Yes, what you point out about the paper is very bad, and the authors need to be called out on it. If they said nothing at all, it'd already be sloppy. But they specifically say "components of a typical computer chess program, focusing specifically on Stockfish" include a " carefully tuned opening book" and an "endgame tablebase". If they then don't mention turning them off, that's highly misleading, regardless of how much effect it had on the score.

It's chessplayers' duty to make noise about this, because the public have no idea.
rubinsteinak rubinsteinak 12/7/2017 05:23
I suspect Deep Mind did run games against Stockfish with its opening book and endgame tablebase turned on and the results weren't as convincing. Just a guess. I notice Deep Mind's paper doesn't mention that the opening book for Stockfish was turned off, which, to me as a chessplayer, is a little deceptive. Just sayin'. Nevertheless, this is a tremendous breakthrough and a break away from the brute-force paradigm.
Bertman Bertman 12/7/2017 04:43
64-36 is a 102 Elo advantage.

https://www.fide.com/fide/handbook.html?id=197&view=article
fgkdjlkag fgkdjlkag 12/7/2017 04:17
@ Nezhmetdinov191919, stockfish is not 3389 without the opening book, so actually those ratings for AlphaZero are incorrect.
fgkdjlkag fgkdjlkag 12/7/2017 04:15
I was wondering if they would work on chess after the self-learning AlphaGo was made. It appears that they were.
In many of these published games AlphaZero had an advantage early on. I agree with @celeje, this is really not as impressive as it sounds. The real question was how good AlphaZero is compared to the top existing programs, not a pared-down version with no opening book. But that would not have been as impressive for AlphaZero, so they didn't test it.