The future is here – AlphaZero learns chess

by Albert Silver
12/6/2017 – Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured it out to the level that beats the strongest programs in the world convincingly! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess, and came up with this spectacular result.


DeepMind and AlphaZero

About three years ago, DeepMind, a Google-owned company that specializes in AI development, turned its attention to the ancient game of Go. Go had been the one game to elude all computer efforts to reach world-class strength, and right up until the announcement it was still deemed a goal at least another decade away; that was how large the gap was. When a public match was organized against the legendary Lee Sedol, a South Korean whose track record places him among the greatest players ever, everyone expected an interesting spectacle but a certain win for the human. The question was not even whether the program AlphaGo would win or lose, but how much closer it had come to the Holy Grail. The result was a crushing 4-1 victory, and a revolution in the Go world. After a great deal of second-guessing by the elite, who could not accept the loss, they eventually came to terms with the reality of AlphaGo: a machine among the very best, albeit not unbeatable. It had lost a game, after all.

The saga did not end there. A year later a new, updated version of AlphaGo was pitted against the world number one of Go, Ke Jie, a young Chinese player whose genius invites comparison with Magnus Carlsen in chess. At the age of just 16 he won his first world title, and by 17 he was the clear world number one. That had been in 2015, and now, at 19, he was even stronger. The new match was held in China itself, and even Ke Jie knew he was most likely a serious underdog. There were no illusions anymore. He played superbly but still lost by a perfect 3-0, a testimony to the amazing capabilities of the new AI.

Many chess players and pundits had wondered how it would do in the noble game of chess. There were serious doubts about just how successful it might be. Go is a huge, long game played on a 19x19 grid, in which all stones are identical and none ever moves. Calculating ahead as in chess is an exercise in futility, so pattern recognition is king. Chess is very different. There is no questioning the value of knowledge and pattern recognition in chess, but the royal game is supremely tactical, and a lot of knowledge can be compensated for by simply outcalculating the opponent. This has been true not only in computer chess, but among humans as well.

However, there were some very startling results in the last few months that need to be understood. DeepMind's interest in Go did not end with that match against the number one. You might ask what more there was to do after that. Beat him 20-0 instead of 3-0? No, of course not. Instead, the super Go program became an internal litmus test of sorts. Its standard was unquestioned and quantified, so if one wanted to gauge a new self-learning AI, throwing it at Go and seeing how it compared to AlphaGo was a ready-made way to measure it.

A new AI was created, called AlphaZero. It differed in several striking ways. The first was that it was not shown tens of thousands of master games of Go to learn from; it was shown none. Not a single one. It was given only the rules, without any other information. The result was a shock. Within just three days, the completely self-taught program was stronger than the version that had beaten Lee Sedol, a result the previous AI had needed over a year to achieve. Within three weeks it was beating the strongest AlphaGo, the one that had defeated Ke Jie. What is more: while the Lee Sedol version had used 48 highly specialized processors, this new version used only four!

Graph showing the relative evolution of AlphaZero | Source: DeepMind

AlphaZero learns Chess

Approaching chess might still seem unusual. After all, although DeepMind had already shown near-revolutionary breakthroughs with Go, that was a game computers had yet to conquer. Chess had its Deep Blue 20 years ago, and today even a good smartphone can beat the world number one. What exactly is there to prove?

Garry Kasparov is seen chatting with Demis Hassabis, founder of DeepMind | Photo: Lennart Ootes

It needs to be remembered that Demis Hassabis, the founder of DeepMind, has a profound chess connection of his own. He was a chess prodigy, and at age 13 was the second-highest-rated player under 14 in the world, second only to Judit Polgar. He eventually left the chess track to pursue other things, like founding his own PC video game company at age 17, but the link is there. Still, there was a burning question on everyone's mind: just how well would AlphaZero do if it were focused on chess? Would it merely be very smart, but get smashed by the number-crunching engines of today, where a single ply is often the difference between winning and losing? Or would something special come of it?

Professor David Silver explains how AlphaZero was able to progress much more quickly when it had to learn everything on its own, as opposed to analyzing large amounts of data. The efficiency of a principled algorithm was the most important factor.

A new paradigm 

On December 5 the DeepMind group published a new paper on arXiv, the preprint server hosted by Cornell University, called "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", and the results were nothing short of staggering. AlphaZero had done more than just master the game; it had attained new heights in ways considered inconceivable. The proof of the pudding is in the eating, of course, so before going into some of the fascinating nitty-gritty details, let's cut to the chase. It played a 100-game match against the latest and greatest version of Stockfish, and won by an incredible score of 64 : 36, and not only that: AlphaZero had zero losses (28 wins and 72 draws)!
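For readers who want to translate that 64% score into a rating gap: under the standard logistic Elo model (a back-of-the-envelope conversion, not a figure taken from the paper), a 64 : 36 result corresponds to roughly 100 Elo points:

```python
import math

def elo_difference(expected_score: float) -> float:
    """Elo gap implied by an expected score, per the standard logistic model:
    E = 1 / (1 + 10^(-d/400))  =>  d = 400 * log10(E / (1 - E))."""
    return 400 * math.log10(expected_score / (1 - expected_score))

# AlphaZero scored 64 points out of 100 games, i.e. an expected score of 0.64
print(round(elo_difference(0.64)))  # -> 100
```

This is why commentators describing the match speak of an advantage of about 100 Elo, despite the unbeaten record.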

Stockfish needs no introduction to ChessBase readers, but it is worth noting that it was searching nearly 900 times more positions per second. AlphaZero was calculating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was running at 70 million positions per second. To appreciate how big a deficit that is: a version of Stockfish slowed down by a factor of 900 would search roughly 8 moves less deep. How is this possible?
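The "roughly 8 moves" figure follows from the effective branching factor of an alpha-beta searcher: each extra ply multiplies the search tree by that factor. Assuming an effective branching factor of about 2.3 (a typical ballpark for Stockfish-class engines, and my assumption here, not a number from the paper), a 900-fold speed handicap costs about 8 plies of depth:

```python
import math

def plies_lost(speed_ratio: float, effective_branching_factor: float) -> float:
    """Search depth sacrificed when examining `speed_ratio` times fewer nodes,
    given that each additional ply multiplies the tree by the branching factor."""
    return math.log(speed_ratio) / math.log(effective_branching_factor)

print(round(plies_lost(900, 2.3)))  # -> 8
```

With a larger branching factor the depth loss would be smaller, which is why the figure is only a rough equivalence.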

The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" on arXiv, Cornell University's preprint server

The paper explains:

“AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search, as originally proposed by Shannon. Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero’s MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief that alpha-beta search is inherently superior in these domains.”

This diagram shows that the longer AlphaZero had to think, the more it improved compared to Stockfish

In other words, instead of the hybrid brute-force approach that has been the core of chess engines to date, it went in a completely different direction, opting for an extremely selective search that emulates how humans think. A top player may outcalculate a weaker player in both consistency and depth, but that is still a joke compared to what even the weakest computer programs do. It is the human's sheer knowledge and ability to filter out so many moves that lets them reach the standard they do. Remember that although Garry Kasparov lost to Deep Blue, it is not at all clear that the machine was genuinely stronger than him even then, despite reaching speeds of 200 million positions per second. If AlphaZero is really able to use its understanding not only to compensate for examining 900 times fewer positions, but to surpass the engines that examine more, then we are looking at a major paradigm shift.
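The paper describes this selective search as Monte Carlo tree search guided by the network's move priors and value estimates. The following is a minimal sketch of the PUCT-style selection rule at the heart of such a search; all names and numbers are illustrative, and the real system feeds these priors and values from a deep neural network rather than by hand:

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the index of the child maximizing Q + U, where the exploration
    bonus U favors moves the policy likes (high prior) that have so far
    received few visits."""
    total_visits = sum(ch["visits"] for ch in children)

    def score(ch):
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])
        return ch["value"] + u  # Q + U

    return max(range(len(children)), key=lambda i: score(children[i]))

# Three hypothetical moves: a well-explored one, an unexplored one the
# network likes, and a lightly explored one with a good value so far.
moves = [
    {"prior": 0.6, "visits": 10, "value": 0.1},
    {"prior": 0.3, "visits": 0,  "value": 0.0},
    {"prior": 0.1, "visits": 2,  "value": 0.5},
]
print(puct_select(moves))  # -> 1: the unexplored high-prior move is searched next
```

The point of the rule is visible in the toy example: rather than expanding every move to a fixed depth, the search pours its limited node budget into the handful of moves the network considers promising.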

How does it play?

Since AlphaZero was given no chess knowledge beyond the rules, meaning no games and no opening theory, it had to discover opening theory on its own. And recall that this is the result of only 24 hours of self-learning. The team produced fascinating graphs showing the openings it discovered, as well as the ones it gradually rejected as it grew stronger!

Professor David Silver, lead scientist behind AlphaZero, explains how AlphaZero learned openings in Go, and gradually began to discard some in favor of others as it improved. The same is seen in chess.

In the diagram above, we can see that in its early games AlphaZero was quite enthusiastic about playing the French Defense, but after two hours (this is so humiliating) began to play it less and less.

The Caro-Kann fared a good deal better, and held a prime spot in AlphaZero's opening choices until it also gradually filtered it out. So what openings did AlphaZero actually like or choose by the end of its learning process? The English Opening and the Queen's Gambit!

The paper was also accompanied by ten of the games to illustrate the results. It needs to be said that these are very different from the usual fare of engine games. If Karpov had been a chess engine, he might have been called AlphaZero. There is a relentless positional boa-constrictor approach that is simply unheard of. Modern chess engines are focused on activity, and have special safeguards to avoid blocked positions, as they have no understanding of them and often find themselves in a dead end before they realize it. AlphaZero has no such prejudices or issues, and seems to thrive on snuffing out the opponent's play. It is singularly impressive, and what is astonishing is how it also finds tactics that the engines seem blind to.

 

This position, from Game 5 of the ten published, arose after 20...Kh8. The completely disjointed array of Black's pieces is striking, and AlphaZero came up with the fantastic 21.Bg5!! After analyzing the move and its consequences, there is no question it is the killer here. While my laptop cannot produce 70 million positions per second, I gave the position to Houdini 6.02, running at 9 million positions per second. It analyzed for one full hour and was unable to find 21.Bg5!!

A screenshot of Houdini 6.02 after an hour of analysis

Here is another little gem of a shot: having completely stymied Stockfish positionally, AlphaZero now wraps things up with some nice tactics. Look at this incredible sequence in game nine:

 

Here AlphaZero played the breathtaking 30.Bxg6!! The immediate point is that 30...fxg6 runs into 31.Qxe6+, but how do you continue after the game's 30...Bxg5 31.Qxg5 fxg6?

 

Here AlphaZero continued with 32.f5!!, and after 32...Rg8 33.Qh6 Qf7 34.f6 it had obtained a deadly bind, which it converted into a win 20 moves later. Time to reach for a thesaurus for synonyms of 'amazing'.

What lies ahead

So where does this leave chess, and what does it mean in general? This is a game-changer, a term so often used and abused, and there is no other way of describing it. Deep Blue was a breakthrough moment, but its result came from highly specialized hardware whose sole purpose was to play chess. If one had tried to make it play Go, for example, it would never have worked. This completely open-ended AI, able to learn from the least amount of information and take it to levels hitherto unimagined, is not a threat to 'beat' us at any number of activities; it is a promise to analyze problems such as disease and famine in ways that might conceivably lead to genuine solutions.

For chess, this will likely lead to genuinely breakthrough engines following in these footsteps. That is what happened in Go. For years and years, Go programs had been more or less stuck where they were, unable to make any meaningful advances, and then along came AlphaGo. It wasn't that AlphaGo offered some inspiration to 'try harder'; just as here, a paper was published detailing all the techniques and algorithms developed and used, so that others might follow in its footsteps. And they did. Literally within a couple of months, new versions of top programs, such as Crazy Stone, began offering updated engines with deep learning, which brought hundreds (plural) of Elo points in improvement. This is no exaggeration.

Within a couple of months, the revolutionary techniques used to create AlphaGo began to appear in top PC programs of Go

The chess paper offers similar information, allowing anyone to do what they did. Obviously they won't have the benefit of the specialized TPUs, processors designed especially for this deep-learning training, but neither are they required to. It bears remembering that this was also done without many of the specialized programming techniques and tricks of chess programming. Who is to say the two cannot be combined for even greater results? Even the DeepMind team thinks it bears investigating:

"It is likely that some of these techniques could further improve the performance of AlphaZero; however, we have focused on a pure self-play reinforcement learning approach and leave these extensions for future research."

Replay the ten games between AlphaZero and Stockfish 8 (70 million NPS)

Born in the US, he grew up in Paris, France, where he completed his Baccalaureat, and after college moved to Rio de Janeiro, Brazil. He had a peak rating of 2240 FIDE, and was a key designer of Chess Assistant 6. In 2010 he joined the ChessBase family as an editor and writer at ChessBase News. He is also a passionate photographer with work appearing in numerous publications.
Discussion and Feedback
Bov Bov 12/12/2017 04:18
@jsaldea12: I'm still not sure whether you are just a troll or really a mythomaniac.
The facts are: 1) your puzzle has no mate-in-9 solution, 2) the solution you gave is wrong, and 3) mates in 9 are no big challenge for engines.
TRM1361 TRM1361 12/12/2017 04:13
@Rasmonte 12/9/2017 11:29
Thanks
I didn't know that. An April Fools' joke, and it got me. I remember hearing about it long after that (early 80s) and saying "WTF? H4? I can beat that".
jsaldea12 jsaldea12 12/12/2017 01:32
Now it is final: AlphaZero cannot solve the puzzle, White to mate Black in 9 moves. Position: White: Ka3, Ba7, Ba8, Nf3, Ng1, Pa2, Pa4, Pb5, Pc2, Pd6, Pe2, Pg5;
Black: Kc4, Bc8, Nd5, Pc3, Pc5, Pd7, Pe3, Pf6, Pg3, Pg7, Ph6. If it were mate in 4 or 5 moves, it would be chicken pie for Komodo, Stockfish and AlphaZero, but with puzzles this complex, 9-movers and up, there is a limit to what these supercomputers can perform. Although using algorithmic principles, AlphaZero can reach up.
original sin original sin 12/11/2017 10:39
[Translated from Persian] I think it changes all the known and standard lines.
pcst pcst 12/11/2017 01:31
"Chess for friendship."
pcst pcst 12/11/2017 01:27
We play chess every day. Don't worry, AZ: if you are clever enough at chess thinking, somebody will come to play you all the time. Don't be shy to play against everything that can play chess. Just do it: for any position you think is won or drawn, if you still lose, it means there is still a bug in your program, and you need to solve the problem until it is the best, cleverest engine. Please aim at everyone who really likes to play chess.
Regards. We play for friendship, not to kill other people's ideas.
jsaldea12 jsaldea12 12/11/2017 12:00
I posted the puzzle to the Facebook page of Demis Hassabis early in the morning of Dec. 7, and I posted it on en-chess and chess24 several times the same day, hoping AlphaZero would respond and make mincemeat of the puzzle. Then, on Dec. 8, 2017, after 20 hours more or less, no response. But maybe it would be worth trying: let AlphaZero solve the puzzle BY ITSELF, without human intervention. (I still like to think that the human maker prevails over the machine.)
jsaldea12 jsaldea12 12/10/2017 11:53
Just imagine a general commanding his army to fight, and it does. But with AlphaZero it is more than that: it obeys the command and EXECUTES IT BY ITSELF to perfection. This is what happened in the match with Stockfish. Maybe DeepMind's AlphaZero can be commanded to fight cancer cells (didn't Google buy DeepMind for some $400 million?) or to probe the deeper recesses of the universe, etc., and learn its secrets.
fgkdjlkag fgkdjlkag 12/10/2017 11:43
@jsaldea12, how do you know that the makers of AlphaZero saw your puzzle?

Another strange point about the paper: they only published games in which AlphaZero won, even though one-third of the total games were reportedly drawn by Stockfish (despite the lack of its opening book). For a scientific paper, they should have randomly selected games, to give an accurate representation of AlphaZero. But their chief aim seems to be to show off their product and Google (also reflected in the match conditions).
guilhermevoncalm guilhermevoncalm 12/10/2017 11:24
Incredible advance in the history of mankind. More than stepping on the moon. The way it is going all earth problems will be solved. Free exchange of ideas and money will be superfluous.
jsaldea12 jsaldea12 12/10/2017 11:23
About that chess puzzle of mine, the 9-mover: I would like to thank GM Mark Erenburg, arbiter at the World Chess Cup, for having been patient and kind enough to prod me many times until I perfected the puzzle, in time for the announcement of AlphaZero's demolition of Stockfish. I was thinking AlphaZero would make mincemeat of the puzzle. It did not happen. It is still likely that chess puzzles, 9-movers and up, are not yet programmed into supercomputers like Komodo, Stockfish, and even AlphaZero.
APonti APonti 12/10/2017 06:15
I'm thinking of quitting chess. This is too humiliating: a program that in 40 days "learns" Go and beats the world champion... and after just 4 hours of "learning" chess beats one of the best chess programs...

Please make a program like that to search for the cure of cancer, for instance !
jwcb jwcb 12/10/2017 05:35
I think we need to be careful about drawing conclusions such as, "The English is the best opening." The 12 openings shown in Table 2 in the paper (https://arxiv.org/pdf/1712.01815.pdf) show the 12 openings most often played by humans, not those most often played by AlphaZero during its self-play training. Furthermore, look more closely at the graph for the English Opening in the table. After 8 hours of self-play, it was playing the English about 7 percent of the time. That means it was playing something else 93 percent of the time. We don't know what that was. The same story goes for the Queen's Gambit.
celeje celeje 12/10/2017 12:48
@fgkdjlkag, @tjallen: I'd guess they had to do this because of finite computer resources (to do with how games are represented). This makes me also think they probably did this only for the training phase. Of course, it'd be ridiculous if they did this for the match. This is another thing they need to clarify.
Aramantik Aramantik 12/9/2017 07:07
Not to mention that SF played 8 of the 10 published games with Black :) Not fair testing. Give it a powerful book, let it play powerful openings, and always give a rematch with the opposite color.
Aramantik Aramantik 12/9/2017 06:59
In my opinion, SF was not given a fair chance. It was forced to play bad openings with Black and was never even given a chance to play the same opening with the opposite color.
Maybe AlphaZero can play without a book, but SF and other chess engines play significantly weaker without strong books.
The least that should have been done is to allow SF to take revenge with the opposite color in the same opening. Not to mention that SF 8 is old and newer versions are available.
fgkdjlkag fgkdjlkag 12/9/2017 04:30
@tjallen, that is a good point. With the 3-fold repetition and 50-move rule, I see no reason for chess games to be stopped early and declared draws.

Regarding the upper elo limit, it is primarily a factor of the game, as someone posted, but also of the machine/player.

@jsaldea12, talking about a "cure for cancer" is a bit misleading. Take another entity - coronary artery disease. It is known that it is related to diet, exercise, and stress (among many other factors). Is there a single "cure" for it? Cancer is a mutation of cells, which has a number of known risk factors as well, including diet, exercise, stress. Is there any more reason to think that there is a single cure for it (unlike the treatment of a bacteria or virus), and that it is going to be developed by google? Keep in mind these are profit-making corporations, and based on its past actions, its primary motivation is profit, not saving humankind.
celeje celeje 12/9/2017 11:34
@JactaEst: A recent computer chess tournament had Book Off and it seems the openings were still varied. Here's one thing you can try. Run a (e.g. blitz) tournament of Stockfish 8 against itself, with Book Off, Ponder Off, and exactly the same settings for all the games. If it's completely deterministic, won't all games be identical? I bet that won't happen, and maybe the openings will be varied too. If you can do this, please let us know the results.
Rasmonte Rasmonte 12/9/2017 11:29
@TRM1361
The story with 1.h4 was an April Fools' hoax by Martin Gardner in his Mathematical Games column in Scientific American in April 1975.
JactaEst JactaEst 12/9/2017 10:52
I'm not clear on this 'no opening book for Stockfish' assertion.
When I ran my Stockfish 8 last year, it decided the best defence to d4 was the Ragozin and played it 100% of the time with the opening book off.
If an engine with no randomisation factor/opening book thinks for 60 seconds, won't it always reply with exactly the same response, the one it deems to be the best?
And yet in the ten games given, Stockfish responded to d4 from AZ with both Nf6 and e6. Also noticeable is that AZ varied its replies to 1...Nf6 and 1...e6.
It's not surprising that a lot of the d4 games ended up in the Queen's Indian. Given free rein, engines playing White seem to prefer offering Black that option rather than the Nimzo.
We really need to see the PGN for all 100 games to see what was going on...
TRM1361 TRM1361 12/9/2017 07:33
So what is AlphaZero's favourite opening with White and with Black? The English and the Queen's Gambit were mentioned for White, but the paper seems to have a very low opinion of Black's choices.

It doesn't like the Queen's Pawn, King's Indian, French, or Sicilian. It liked the Caro-Kann up to hour 6, then abandoned it.

Did it invent a new one? I remember way back some computer had done this self-play type of stuff and claimed 1.h4 as White to be the best :)
Robert Fowler Robert Fowler 12/9/2017 06:19
Alpha Zero's "Immortal Zugzwang Game" against Stockfish analyzed: https://youtu.be/lFXJWPhDsSY
pcst pcst 12/9/2017 05:49
Waited so long, AZ. Don't be shy to play in test bug mode. I showed you some of my games already.
celeje celeje 12/9/2017 04:37
@wasmaster:
Also, please look through all the previous comments here. You'll see comments relevant to your claims (e.g. you'll see that you are plain wrong to claim that AZ performed 200 ELO stronger).
Masquer Masquer 12/9/2017 04:33
@wasmaster
Isn't it a false statement to claim that Elo difference was 200 Elo, when in fact it was just 100 Elo for the 100-game match? Just use an Elo calculator, it will tell you the correct difference.
celeje celeje 12/9/2017 04:30
@wasmaster:
Google (i.e. the same company behind AZ) boasted this year that their TPU delivers up to 30 times the performance of a contemporary CPU. So going by their own words, 4 TPUs are equivalent to up to 120 CPUs.

They did not run Stockfish on 64 CPUs. They ran it on 64 threads, so it perhaps was 32 cores, which may have in turn been perhaps 8 CPUs (each quad-core).

It looks like AZ had a huge hardware advantage.
kaimiddleton kaimiddleton 12/9/2017 04:16
I would like to know, in precise terms, what the power usage of the Alpha-zero 4 TPU system is compared to the power usage of the 64 core system that Stockfish was running on. If those numbers are comparable then in general I don't have a problem with Alpha-zero having "superior hardware". (Quibbles about the 1GB of Stockfish hash aside.)

On the talkchess forum someone proposed the following, but I don't know if it's correct: "1 Google TPU is around 50W, basically you have them 4 and another Haswell to run actual MCTS on those so around 300W. SF's hardware was most probably two 32 core CPUs each at 150W, so around 300W also."
wasmaster wasmaster 12/9/2017 04:10
A few notes on AlphaGo Zero.
1) I've seen some false statements about the conditions of the match:

- According to the DeepMind paper, for the "evaluation" (the match vs. Stockfish), the hardware was pretty comparable for both contestants.
"...we used Stockfish version 8 (official Linux release) as a baseline program, using 64 CPU threads and a hash size of 1GB" and AlphaZero "... was executed on a single machine with 4 TPUs [Tensor processing units - think graphics cards on steroids]". The training, prior to the match, used many more TPUs (500?).

- Stockfish was not neutered by removing its opening book. AFAIK, it didn't use an ending table base, but not sure if that mattered.
- AlphaZero was "only" 200 ELO stronger than Stockfish. However, that misses the point. Chess almost certainly has an ELO limit: a rating at which a player can reliably achieve an "ideal" result (which is almost certainly a draw) against a perfect opponent. I'm guessing that the asymptote in rating for AlphaZero is partially due to Stockfish being close enough to this limit to be able to draw an appreciable number of games against any engine or even perfect play.
I don't have proof of this, but here's an analogy:
Time for a reset of ELO -- you and every other tournament player get your rating reset to 1500. You're going to play lots of games to reestablish your rating and everyone else's. Oh, and the game is Tic-tac-toe (aka noughts and crosses, Xs and Os, ...). What do you think the highest legitimate rating will be? 1550? 1520? 1501?
In Checkers/Draughts, this happened in the mid-1990s. The last man/machine match ended up +1/-0/=31 to the machine, and play was very close to optimal by both sides. Any machine would be unable to achieve a rating more than a few hundred points above a human playing almost error-free checkers. This is what I expect as engines get stronger -- machine vs. machine matches will have an increasing percentage of draws, and engines/AIs will not be able to achieve significantly better results with large differences in hardware and algorithms/AI. (Go probably has a MUCH higher ELO limit.)
jsaldea12 jsaldea12 12/9/2017 12:33
AlphaGo is a breakthrough, just like the flat-screen TV or LED, which have now become part of our lives. But it appears the potential of AlphaGo's complete mastery of computerized algorithms, with sets of rules and patterns being obeyed totally at the speed of light, has opened a new dimension. In astronomy, for instance, it may be able to prove whether black holes exist. To me, they do not, being against the law of physics: the bigger the fire, the bigger the light. AlphaGo may be able to explain why the gravity of Earth is all attraction, NO REPULSION (see the article on the internet). In medicine, it may be able to discover cures for cancer, viruses, etc. These are just samples of what AlphaGo's complete mastery of algorithms can do. I congratulate Demis Hassabis on making this Nobel-worthy breakthrough.

Jose s. aldea dec. 9, 2017
Scientist-inventor
Masquer Masquer 12/9/2017 12:10
@tourthefarce
AlphaZero is 100 Elo points above the crippled SF8 version it played against, based on the score in the 100 game match.
tourthefarce tourthefarce 12/8/2017 09:56
Did anyone calculate AlphaZero's rating based on this match with Stockfish?
Catholic_Church Catholic_Church 12/8/2017 08:37
Some Youtube links for the game analysis between AlphaZero and Stockfish:

https://www.youtube.com/watch?v=_CJp5GMG5IA
https://www.youtube.com/watch?v=_hs5pbLnmHs
https://www.youtube.com/watch?v=0g9SlVdv1PY
https://www.youtube.com/watch?v=0jpZ0NaR9TM
https://www.youtube.com/watch?v=11bEzddnI5A
https://www.youtube.com/watch?v=11vMwWPUKmo
https://www.youtube.com/watch?v=1jwQlAsvygY
https://www.youtube.com/watch?v=2AtMghW0G8I
https://www.youtube.com/watch?v=3taHp9EC5Kw
https://www.youtube.com/watch?v=6BYpZR6aL4I
https://www.youtube.com/watch?v=7JzuiBaOzHk
https://www.youtube.com/watch?v=7-MborNxYWE
https://www.youtube.com/watch?v=akgalUq5vew
https://www.youtube.com/watch?v=aPEU3NXxI7s
https://www.youtube.com/watch?v=cHx-TBThDj4
https://www.youtube.com/watch?v=dY3VimPpngI
https://www.youtube.com/watch?v=fxjzmnHKt5I
https://www.youtube.com/watch?v=hb4WftADuWQ
https://www.youtube.com/watch?v=HcVuBssRzdA
https://www.youtube.com/watch?v=hhhwMo0IGes
https://www.youtube.com/watch?v=ieEQQ1f_ZQY
https://www.youtube.com/watch?v=jBmdbKqgX54
https://www.youtube.com/watch?v=JPEefHPZNr4
https://www.youtube.com/watch?v=JxZ91B40wSU
https://www.youtube.com/watch?v=KMBJcP17qdA
https://www.youtube.com/watch?v=lb3_eRNoH_w
https://www.youtube.com/watch?v=lFXJWPhDsSY
https://www.youtube.com/watch?v=M3B318ybLqA
https://www.youtube.com/watch?v=M8ihJzxbXxU
https://www.youtube.com/watch?v=MEpY7k_DhWA
https://www.youtube.com/watch?v=M-sT9u7bol0
https://www.youtube.com/watch?v=MTjZtHUDiZg
https://www.youtube.com/watch?v=M-z4cncU8Cs
https://www.youtube.com/watch?v=ouS_5D9JHgQ
https://www.youtube.com/watch?v=pBoqhVavWf0
https://www.youtube.com/watch?v=pcdpgn9OINs
https://www.youtube.com/watch?v=pXjps2twePs
https://www.youtube.com/watch?v=rby_XdnjL98
https://www.youtube.com/watch?v=sSuIYqZXUiU
https://www.youtube.com/watch?v=tEBz6RFO_g8
https://www.youtube.com/watch?v=thZzaS-noSo
https://www.youtube.com/watch?v=UcAfg9v_dDM
https://www.youtube.com/watch?v=uCIRGysTUKU
https://www.youtube.com/watch?v=UIz7BxLYTvM
https://www.youtube.com/watch?v=V28poVKT1-A
https://www.youtube.com/watch?v=vkVXeIZH8-I
https://www.youtube.com/watch?v=vSjYUgi2Vr8
https://www.youtube.com/watch?v=xmpWx7oWpvM
https://www.youtube.com/watch?v=Y4s59sVrwG4
https://www.youtube.com/watch?v=YSvVFF4jIyo
pcst pcst 12/8/2017 07:26
An example where I beat Stockfish 8 64 too. I think AlphaZero will move like Stockfish in some games. I would like to test bug mode with AlphaZero.
[Event ""]
[Site ""]
[Date "24/07/2017 8:26:43"]
[Round "1"]
[White "Human"]
[Black "Stockfish 8 64"]
[Opening "Petrov's defence"]
[Eco "C42"]
[TimeControl "0.5+0 (Min.+Inc.)"]
[Result "1-0"]

{[%clk 0:00:30] [%clk 0:00:30] } 1. e4 {[%clk 0:00:30] } e5 {[%clk 0:00:27]
} 2. Nf3 {[%clk 0:00:29] } Nf6 {[%clk 0:00:26] } 3. d3 {[%clk 0:00:28]
} Nc6 {[%clk 0:00:25] } 4. Bd2 {[%clk 0:00:28] } d5 {[%clk 0:00:24] } 5.
Nc3 {[%clk 0:00:27] } d4 {[%clk 0:00:23] } 6. Ne2 {[%clk 0:00:26] } Bc5
{[%clk 0:00:22] } 7. h3 {[%clk 0:00:26] } O-O {[%clk 0:00:22] } 8. Qc1
{[%clk 0:00:25] } Be6 {[%clk 0:00:21] } 9. Ng3 {[%clk 0:00:24] } a5 {[%clk
0:00:20] } 10. Be2 {[%clk 0:00:24] } a4 {[%clk 0:00:19] } 11. Bh6 {[%clk
0:00:21] } gxh6 {[%clk 0:00:19] } 12. Qxh6 {[%clk 0:00:21] } Bb4+ {[%clk
0:00:18] } 13. c3 {[%clk 0:00:19] } dxc3 {[%clk 0:00:18] } 14. O-O {[%clk
0:00:18] } cxb2 {[%clk 0:00:17] } 15. Ng5 {[%clk 0:00:17] } bxa1=Q {[%clk
0:00:16] } 16. Nh5 {[%clk 0:00:17] } Qxf1+ {[%clk 0:00:15] } 17. Bxf1 {[%clk
0:00:16] } a3 {[%clk 0:00:14] } 18. Qg7# {[%clk 0:00:15] } 1-0
dysanfel dysanfel 12/8/2017 06:44
After 31.Qxc7 it looks like Stockfish is better. I cannot believe that AlphaZero won that position. It is uncanny.
mrburns123 mrburns123 12/8/2017 06:39
@sgbowcaster On page 4 of their paper you can see, that AlphaZero didn't make much progress anymore after 200k out of 700k learning steps: https://arxiv.org/pdf/1712.01815.pdf
phenomenonly phenomenonly 12/8/2017 06:03
Incredibly impressive..., as is the game shown in the first diagram with 21.Bg5!! Those interested in this game and its most interesting variations can find an analysis at http://www.sklauffen.de/wordpress/wp-content/uploads/2017/12/2017-12-04-AlphaZero-Stockfish8.htm .
Quitch Quitch 12/8/2017 06:02
sgbowcaster, because they aren't interested in chess so much as in proving how effective their algorithm is. It took ~4 hours for AlphaZero to go from learning chess from scratch to being able to beat the best engine that over a thousand years of human knowledge of the game could create. Chess is just a ruleset for them to demonstrate their neural net.
KingZor KingZor 12/8/2017 05:43
Igo Freiberger, I stand corrected. The whole report is rather slapdash. An amazing accomplishment, but lousy reporting.
sgbowcaster sgbowcaster 12/8/2017 05:03
Why didn't they let AlphaZero learn for 3 or 4 days instead of a few hours, and then write a paper on that?
Bov Bov 12/8/2017 01:51
@jsaldea12
I assume the position is wrong and the knight has to stand somewhere other than d5?! Otherwise, please give the solution after 2.Nxb6.