Inside the (deep) mind of AlphaZero

by Albert Silver
12/7/2018 – It was a long time coming, but the wait is over. After nearly a full year of being ping-ponged from one peer reviewer to the next, the final paper on AlphaZero is out, shedding light on a number of hitherto unknown or misunderstood elements of its construction, not to mention some clarifications and corrections. It includes sample code to help implement the work, and all the games of the match against Stockfish, of which 20 were specially chosen by GM Matthew Sadler. | Graphic: Deep Mind


Full AlphaZero paper is published

When AlphaZero was first announced late last year, it is no exaggeration to say it caused feelings of shock and awe. After all, a new paradigm had been ushered into the somewhat stodgy world of computer chess, challenging decades of accepted truths and promising wondrous things for players all around the world.

Here was a program that eschewed conventional wisdom on how one should be built, challenging even that most basic premise: faster is better. Not only did it not run remotely as fast as Stockfish, the standard it was tested against, but it was a good 900 times slower, yet still stronger by some margin.

Accompanying this eye-opening news was a tantalising pre-paper that shared many of its intimate details with those who could understand it and were willing to work to implement it. Still, there were many who cried foul, screaming that not only had the test match been grossly unfair, as AlphaZero ran on a ‘supercomputer’ while Stockfish did not, but that Stockfish had been nothing short of crippled.

AlphaZero: Shedding new light on the grand games of chess, shogi and Go 

Match conditions

The final paper, published in Science, a serious journal that demands the utmost scrutiny and peer review before accepting a paper, brings a number of corrections regarding the match conditions as well as clarifications on the hardware. In the pre-paper, the hardware ascribed to Stockfish had been 64 threads generating 70 million positions per second, with 32MB (megabytes) for hash tables. That last detail caused no shortage of cries of outrage, since such a minuscule amount could barely benefit it. Then there was the matter of the 100-game match at one minute per move, and finally, last but not least, there were the mysterious four TPUs that AlphaZero was running on. While many today might appreciate what a strong GPU brings to the table, a TPU is hard to quantify.

The final paper brings a number of changes, and it is unclear whether the original conditions were as stated or simply misreported. Whatever the case, the games shared on the Deep Mind website are different from those in the pre-paper, and while there is no shortage of brilliancies (that is unchanged), they are different brilliancies.

In this final paper, the match was not only rerun, with roughly the same result (a +104 Elo performance), but under much better conditions for Stockfish, putting to rest the complaints that it had been crippled. This time Stockfish ran on 44 threads on 44 cores (two 2.2GHz Intel Xeon Broadwell CPUs with 22 cores each), with a 32GB hash and Syzygy endgame tablebases, at a time control of three hours plus 15 seconds per move. Furthermore, Stockfish 8 was not the only version tested; Stockfish 9 was given its chance as well. The relative difference in nodes per second was maintained at roughly 900 to 1, so that much did not change. The authors also measured the overall average nodes per second for each player, instead of just at the start position, as had been the case in the pre-paper. All in all, they report the total results of 1000 games, though only 210 are actually published on the website.
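For readers who want a feel for what a +104 Elo performance means in match terms, the standard logistic Elo model (this illustration is mine, not from the paper) converts a rating gap into an expected score:

```python
# Expected score for a player rated `diff` Elo points above the opponent,
# using the standard logistic Elo model. Illustrative only: the paper
# reports the performance figure directly.
def expected_score(diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# A +104 Elo edge corresponds to scoring roughly 64.5% of the points,
# e.g. about 645 points out of a 1000-game match.
print(round(expected_score(104), 3))
```

With a rating gap of zero the formula gives 0.5, as expected for evenly matched players.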

As to AlphaZero and its first-generation TPUs, the authors help narrow down its strength by explaining that, while not the same, the inference performance is equivalent to a Titan V. The Titan V is without question a superb professional-grade GPU, but its performance is nearly identical to that of the newly released Nvidia RTX 2080 Ti, a $1200 GPU. Powerful? Without question, but hardly a supercomputer unless compared to machines from years back.


Furthermore, the authors tested a variety of conditions, not just play without books. They tried allowing Stockfish to use a book while AlphaZero did not, and even a TCEC-style match using the exact same openings TCEC used in a superfinal a couple of years back, as well as time-handicap matches with AlphaZero getting one third of the time Stockfish got, or even one tenth. Have you ever wondered how AlphaZero would have fared in the TCEC superfinal against Stockfish? Here is the result.

More importantly, all the games for these matches have been released — over 200 games, including a fine selection by Sadler who took the liberty of choosing those he felt were not to be missed.

The article brought much more detailed explanations, as well as graphs to aid understanding

Shogi fans were not overlooked either. Not only were 100 games played by the shogi version of AlphaZero published, but ten of them were chosen by Yoshiharu Habu, the 'Kasparov' of shogi.

One knowledgeable aficionado who went over them was flabbergasted. As he explained, “I've been looking at some of the shogi games...and they are utterly impenetrable. All known joseki (openings) and king-safety principles are thrown out the window! In some of these games, the king doesn't just sit undeveloped in the center but does the chess equivalent of heading out to the middle of the board in the middle game before coming back to the corner for safety and then winning. Astounding!”

In the Science publication where the AlphaZero paper appears, additional commentary was provided by luminaries such as Murray Campbell, a leader in AI research and one of the key names behind Deep Blue, as well as an editorial by Garry Kasparov, who gave his own perspective on it, noting:

(...) I admit that I was pleased to see that AlphaZero had a dynamic, open style like my own. The conventional wisdom was that machines would approach perfection with endless dry maneuvering, usually leading to drawn games. But in my observation, AlphaZero prioritizes piece activity over material, preferring positions that to my eye looked risky and aggressive. Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth. This superior understanding allowed it to outclass the world's top traditional program despite calculating far fewer positions per second. It's the embodiment of the cliché, 'work smarter, not harder'.

AlphaZero shows us that machines can be the experts, not merely expert tools. Explainability is still an issue — it's not going to put chess coaches out of business just yet. But the knowledge it generates is information we can all learn from.

Be sure to read the entire editorial.

Openings

In the pre-paper, numerous fascinating graphs had been published on the opening preferences of AlphaZero as it evolved, as well as its results in test matches against Stockfish. This time the statistics are shared in a more visual manner, with colour bars making it easy to see where it won or lost more often.

There is also a fascinating breakdown of its favourite six-ply sequence in self-play as it evolved. In other words, what it would play as the best opening for both sides for six plies. AlphaZero was trained for a total of 700 thousand steps (think of these as lessons in its evolution), and here we can see what it thought was ideal after just 50 thousand steps, then 143 thousand steps, and so forth until its pinnacle of opening play… get ready to grimace: the Berlin.

The Berlin as the logical evolution of theory?

Some might see AlphaZero's final word on openings, the Berlin, as a sign of regression. After all, after 608 thousand steps it had thought the classic Ruy Lopez ideal.

What we learned

For developers and programmers, this was a godsend, as it finally put a large number of questions to rest regarding the parameters used in training and playing, and included some truly eye-opening revelations. For those wondering about the exact implementations, Deep Mind has provided what it calls sample pseudocode, enough to show how some of the algorithms might be coded. Among the more exciting items on a technical level was a formula that changes the exploration rate of the search according to the number of nodes reached per move: the deeper it looked, the wider the search became.
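The formula in question is the exploration term of the PUCT rule that guides AlphaZero's Monte Carlo tree search: the coefficient grows logarithmically with the number of visits to the parent node. A minimal sketch, with the two constants taken from the released pseudocode and all function and variable names my own:

```python
import math

# Constants as given in Deep Mind's released pseudocode.
C_BASE = 19652   # pb_c_base
C_INIT = 1.25    # pb_c_init

def exploration_rate(parent_visits: int) -> float:
    # Grows slowly (logarithmically) as more simulations are spent on a
    # position, so the search widens the deeper it digs into one move.
    return math.log((1 + parent_visits + C_BASE) / C_BASE) + C_INIT

def puct_score(parent_visits: int, child_visits: int,
               child_value: float, child_prior: float) -> float:
    # Mean value of the child plus a prior-weighted exploration bonus;
    # the child with the highest score is selected for the next descent.
    u = (exploration_rate(parent_visits)
         * child_prior * math.sqrt(parent_visits) / (1 + child_visits))
    return child_value + u

# After 800 simulations the coefficient is barely above C_INIT; after
# hundreds of thousands it has grown substantially, broadening the search.
print(exploration_rate(800))
print(exploration_rate(800_000))
```

The design intuition is that early in the search the prior network should dominate move selection, while a very long think should increasingly sample moves the network initially dismissed.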

So does this wrap up AlphaZero for good now? Hardly. As Demis Hassabis was so ready to point out recently, a new AlphaZero has been developed that is stronger than the one referenced in the paper. Be ready for new announcements!


GM King analysis

Grandmaster Daniel King analyses several of the new games from AlphaZero for his PowerPlay Show.


Replay all AlphaZero's games

 





Born in the US, he grew up in Paris, France, where he completed his Baccalaureat, and after college moved to Rio de Janeiro, Brazil. He had a peak rating of 2240 FIDE, and was a key designer of Chess Assistant 6. In 2010 he joined the ChessBase family as an editor and writer at ChessBase News. He is also a passionate photographer with work appearing in numerous publications, and the content creator of the YouTube channel, Chess & Tech.


jsaldea12 jsaldea12 12/15/2018 01:01
With due respect to Dr. Dennis Hassabis, please see if the following is possible (a) or is true (b):

(a) Can self-searching Alphazero search and recognize if the visible light that envelops all over Milky Way originates from the center of Milky Way. Light and gravity have both positive and negative. Alphazero operates on the same positive and negative. Permit undersigned to propose: Feed the actual astronomical telescope photos of the Milky Way to Alphazero hard ware (brain) and simulate and command the positive and negative of alphazero to recognize the positive and negative of the thick bright lights encompassing all over Milky Way and to trace the source of the light. If the light originates from the center of Milky Way , then it proves light escapes , there is no black hole.



(b) Why Stockfish loses to AlphaZero. It appears like this, subject to correction by Dr. Demis Hassabis: Alphazero is programmed//commanded to self-analyze some 80,000 moves a second from each position as played at that TIME on the board. It means that every position is considered new by Alphazero but is subject to the same programming//command. Then Alphazero selects the best move thereof. While stockfish has storage of moves per second, by the millions, all activated all the time chess is played. But not all positions are on the stated storage millions of stockfish because there are billions of variant moves in just the first 10 moves, thus Stockfish is lost on positions not in its storage. Both Alphazero and Stockfish operate on positive and negative but the programming and command differentiate the positive and negative from both alphazero and stockfish...

Jose S. Aldea
Physicist
December 15, 2018
jsaldea12 jsaldea12 12/13/2018 02:25
It appears like this, subject to correction by Dr, Dennis Hassabis : Alphazero selfanalyzes some 80,000 moves a second from every position as presented and played at that TIME on the board and Alphazero self-mInd selects the best move thereof. And while stockfish has storage of moves per second, by the millions, not all positions presented on the board as played at the time are on the stated storage, as downloaded, by the millions.. And to think that there are billions of variations of moves, the first 10 moves .
Petrarlsen Petrarlsen 12/13/2018 02:29
@ celeje:

And what is your opinion about an AlphaZero / Stockfish match? Which measures should be taken about hardware for this match?
jsaldea12 jsaldea12 12/13/2018 12:32
Continuity: Can self-searching alphazero search if black hole exists in the center of Milky Way?

It appears like this: simulate/feed the actual pictures of the Milky Way to alphazero hard disk (brain) and command alphazero to search if the greatest light of Milky Way comes from the center of Milky Way or not. If it originates from the center, then it proves light escapes , there is no black hole.

Jose S. Aldea
Physicist
Dec. 13.2018
celeje celeje 12/12/2018 10:45
fgkdjlkag: "@TRM1361, the < 24 hours is very misleading as was mentioned by GM Anand. It is dependent on extreme programming power. If alphazero was not running on such hardware, it could have taken months or years."

On the computer I'm typing this on, it'd take more than 1000 years.
celeje celeje 12/12/2018 10:43
@ Petrarlsen:

If both are run on multiCPUs, A0 would be wiped out.

But A0 fans would complain that it's "not meant to be run on CPUs".

So it's really a hardware achievement: TPUs/GPUs.
fgkdjlkag fgkdjlkag 12/12/2018 12:43
@TRM1361, the < 24 hours is very misleading as was mentioned by GM Anand. It is dependent on extreme programming power. If alphazero was not running on such hardware, it could have taken months or years. If you had a billion dollars lying around, you could do things with a computer in < 24 hours as well.

@jsaldea12, you mention that "brute force" is over, but no strong engine has relied on brute force in a very long time, the reliance on brute force (and the number of possibilities in chess) was why no top chess-playing program was ever able to defeat a top human player until they incorporated advanced heuristics, evaluation functions, pruning, and so on. Contrariwise, alphazero has an incredible search depth so you might as well say that it is based on brute force. Look at how far MCTS (monte carlo tree search) would get you in a game like chess or go without extensive brute force.

Unless stockfish can be configured to run on TPUs, I don't think it is possible to have equivalent hardware, but it is an essential point to determining the true strength of alphazero as Petrarlsen pointed out.
Petrarlsen Petrarlsen 12/12/2018 12:12
@ BrianOber:

"Hate to keep using the Formula One analogy but driver x car = race performance. Very difficult to remove the machinery from the equation and evaluate a driver or chess program by itself."

But, precisely, so as to be able to better evaluate drivers, Formula 2 (the last step before Formula 1) uses the same car for all the pilots; as Wikipedia puts it (https://en.wikipedia.org/wiki/FIA_Formula_2_Championship): "(...) Formula 2 has made it mandatory for all of the teams to use the same chassis, engine and tyre supplier so that true driver ability is reflected."

So it would seem logical, in my opinion, to do the same with AlphaZero: to use the same hardware (or as nearly as possible) for AlphaZero and Stockfish, so as to remove as much as possible the hardware from the comparison (as the question, here, isn't the proficiency of the Google's computers - which, quite certainly, is excellent... -, but to know if AlphaZero is really as groundbreaking as it is supposed to be).
BrianOber BrianOber 12/11/2018 11:29
@Petrarlsen

“But the problem is: what would you suggest if Stockfish's team wouldn't succeed in partnering with a big computer company? Because, then, the difference between the two computers would certainly be enormous...”

I don’t know. We’ve come to the crux of the matter. Software x hardware = chess proficiency. Hate to keep using the Formula One analogy but driver x car = race performance. Very difficult to remove the machinery from the equation and evaluate a driver or chess program by itself. So if you can’t have the hardware as a competitive element in some super computer chess championship, it probably isn’t worth doing. Which is what is happening now.
jsaldea12 jsaldea12 12/11/2018 11:11
jsaldea12 Just now
Deep-Mind Alphazero potentialities:

It is like deep mind alphazero has just seen a tree while behind lies the virgin forest. By releasing its secret programming, coding to the public, now many computer experts are downloading with their PCs, laptops. This new self-learning, alphazero has opened a door to this entirely NEW unexplored, unlimited forest for exploration and utilization for the benefit of mankind, re- chess is just one, in medicine (Dr. Hassabis, author od alphazero, mentions cancer), in science ( is black hole real, global warming and climate change).The deep-mind Alphazero has just tap the tip of the iceberg. But the potential is too enormous. Because it has proven its capability to self-think, self-move by itself, without human interference, it can break through in other more important fields in medicines, science.

Google would not spent $500million to buy deep-mind alphazero just for self-solving by itself chess, ,Go. It is meant for something much important. So far it has not broken through but in time, it will.

.Jose Aldea
Physicist
12.11, 2018
bbrodinsky bbrodinsky 12/8/2018 03:43
To all the people defending the recent boring WCC match(es) by saying "they're not boring, they're playing super-accurate computer-like chess, and that by nature is boring and drawish".

Well, let me introduce you to alphazero, which sacs pieces, plays what looks like reckless chess. Oh, it would probably beat Carlsen and Caruana in 12 game matches by something close to 24-0.

Thus removes THAT excuse for the last many dreadful WCC matches. There needs to be a change in the format, some incentive for the players to play real chess, not the safe-play-for-the-rapids non-sense stuff. Maybe we should threaten them with alphazero!

This is such a great article. Brute force has taken a beating! We can only hope that alphazero technology leads to breakthroughs.
Virtuoso Virtuoso 12/11/2018 05:53
how did i get here?
TRM1361 TRM1361 12/11/2018 05:47
In all the arguments on here I think the biggest point is missing. AlphaZero got to this level from just the rules. No opening books, no midgame tactics and no endgame maps. That in and of itself is groundbreaking.

Along the way it re-discovered openings that humans have worked on for centuries and did so to a depth at least equal to human analysis.

It did all that in less than 24 hours.
Petrarlsen Petrarlsen 12/10/2018 03:50
@ BrianOber:

"Google brings all the computers it likes to run AlphaZero. But in turn, Stockfish partners up with another technology company like IBM or Amazon to provide its best supercomputers."

Obviously, in this case, the two would stay more or less comparable, so this wouldn't be much of a problem (even if, to answer precisely the question of the level difference between AlphaZero and the best "classical" programs, it would be better to have completely comparable hardware).

But the problem is: what would you suggest if Stockfish's team wouldn't succeed in partnering with a big computer company? Because, then, the difference between the two computers would certainly be enormous...
jsaldea12 jsaldea12 12/10/2018 01:45
Continuity on Deep-mind Alphazero potentialities:

I see, It is like deep mind alphazero has just sees a tree while behind lies the virgin forest. By releasing its secret programming, coding to the public, now many computer experts are downloading with their PCs, laptops. This new self-learning, alphazero has opened a door to this entirely NEW unexplored, unlimited forest for exploration and utilization for the benefit of mankind, re- chess is just one, in medicine (Dr. Hassabis mentions cancer), in science ( is black hole real, global warming and climate change).The $500 million deep-mind Alphazero has just tap the tip of the iceberg.

.Jose Aldea
Physicist
12.10, 2018
BrianOber BrianOber 12/9/2018 11:56
Additional thoughts on my call for a one-time computer chess spectacular:

1. Anything goes: Google brings all the computers it likes to run AlphaZero. But in turn, Stockfish partners up with another technology company like IBM or Amazon to provide its best supercomputers. It would be similar to the Formula One driver-constructor arrangement. Use any opening or endgame database you like.

2. The match time controls should be a bit faster than human classical to provide more pace to the action and precipitate increased interest in following the games live with commentary. Probably something closer to rapid would be optimum. Also this would allow for more games in the presentation. I think something like 2-4 games per day over 12 days would be about right.

Anyone else have any good ideas?
fgkdjlkag fgkdjlkag 12/9/2018 11:38
@Petrarlsen, yes, what a surprise.
@klklkl, Deepmind did not inspire confidence by handicapping stockfish in the first match. The mistake might have been routine for a non-chess team, but Demis Hassabis is a very strong chess master, who has also kept up to date on top-level chess events. In the 2nd match, there was NO reason for stockfish to play without its opening book. All it does is inflate the numbers in favor of alphazero. Did alphazero play any games against stockfish without alphazero's millions of self-play games and self-training? No? Why not? Shouldn't Deepmind have released the results of alphazero games before it did its training?

"Criticisms of a research team for not carrying out their experiments in public is idle."
This is a false statement because research teams are routinely criticized for experiments not being reproducible and if details of the experiment are obfuscated or kept secret. But your statement is not germane because Deepmind's primary aim is not research for the general advancement of knowledge, but to maximize profit.
jsaldea12 jsaldea12 12/9/2018 03:22
"When there are patterns of cause-and-effect between certain kinds of behavior and certain undesirable results, such as the relationship between smoking and lung cancer, self- teaching algorithms can be of great benefit. The key to this benefit is the presence of predictable causation" Hassabis..
jsaldea12 jsaldea12 12/9/2018 08:30
To the respected authors of Alphazero:

This is a follow up: Can the self-thinking, self teaching, neural Alphazero be fed with all the data, codes needed and be commanded to investigate and verify if there is truth to the concept of black hole that even light cannot escape, thus, is invisible, located in Sagittarius A in the center of Milky Way galaxy. Dr. Stephen Hawkings and the whole scientific community of the world claim it is real , by the millions, more or less, in whole wide outer space. But Dr. Einstein, the author of black hole, until his death later disowned its existence (including undersigned). Last November of year 2017, a super-conglomerate of all giant astronomical telescopes of the world combined together and peep simultaneously, at Sagittarius A. But up to now, no result is released. The conglomerate said it will released the result by February, 2018. Maybe Alphazero can give a helping hand!!
Keshava Keshava 12/9/2018 07:22
@klklkl,
Perhaps it is not about Stockfish but that some people have a different opinion about what a fact is.
I agree with your comments that "chess isn't the purpose of this research, rather the creation of a general purpose AI." but they do seem to care what the chess world thinks - otherwise why would they spend man-hours selecting certain games for release? Also, isn't it reasonable that the way they trumpet their results would generate feedback from people that ARE interested in chess and also the development of chess playing engines? Since the primary reason for a public corporation to be interested in AI or anything else is profit for their shareholders then I think it is perfectly reasonable for people interested in chess or chess engines to be skeptical about their claims and express annoyance at people who consider the results of private tests "facts". If the problem is "the tribal fanatics to whom Stockfish is a creed not a tool" then why don't those people express equal skepticism when Stockfish loses a game or a match in TCEC? I think the answer is obvious.
Petrarlsen Petrarlsen 12/9/2018 04:11
@ klklkl:

AlphaZero is at the frontier of artificial intelligence and chess; the artificial intelligence specialists sees it as a tool for developing artificial intelligence, and chess players and the chess public as the (possible) ultimate and revolutionary chess program.

And in the world of chess, to know if a player is better than another, the custom is to have competitions; thus it seems logical to me that the chess public should want to see a public match between AlphaZero and Stockfish (it could be also a quandrangular tournament against Stockfish, Houdini and Komodo).

Furthermore, it would certainly be a very interesting match, which would be much followed by the chess public, in my opinion...
klklkl klklkl 12/9/2018 03:45
The resistance of readers to AlphaZero's emergence charms me with every outburst. But their determination to obfuscate clearly reported facts is less endearing.

This match was played in January, to address fair criticisms made last year over methodology. That match against Stockfish 8 was replayed, and complemented with games against latest Stockfish development build as of January, including with Stockfish using an opening book, and from TCEC starting tableaux (arguably hampering A0 by forcing it to play openings it rejected during its training phase). There were also additional games played with significant time advantages for SF. All this has been widely reported. Yet reading the comments here you'd imagine Google had gone out of their way to rig both hardware and software against Stockfish.

Criticisms of a research team for not carrying out their experiments in public is idle. The results of this match are obviously interesting from a chess perspective, but it ought to be remembered chess isn't the purpose of this research, rather the creation of a general purpose AI. It's encouraging to see that A0 is still under development (Demis' mention of a stronger more recent version). With their paper published, a public match isn't altogether impossible to foresee. Though I really doubt anything could satisfy the tribal fanatics to whom Stockfish is a creed not a tool.
Petrarlsen Petrarlsen 12/8/2018 10:50
@ fgkdjlkag:

"I agree with, and am heartened by, many of the comments below, that Deepmind needs to have a public match. Why would stockfish ever be forced to play a game without its opening book? Alphazero needs to play the strongest computer without a handicap and we see how good it is."

We don't frequently agree, but, on with this, I fully agree!...

And a public match between AlphaZero and Stockfish "full-force" would attract enormous public interest, in my opinion...

@ BrianOber:

"The time has come for a true classical computer world chess championship. Same rules and time controls as the human championship. Bring any computers you like. Anything goes."

"Bring any computers you like": I think it would be much preferable to ensure that AlphaZero and Stockfish would run on comparable computers, because, as DeepMind is owned by Google, their computer would very probably be much stronger than the computer on which Stockfish would run... And we don't want to know if Google's super-computers are REALLY good (which they certainly are...), but if AlphaZero is, or not, better than the best "classical" programs...
CID64 CID64 12/8/2018 08:07
"AlphaZero was trained for a total of 700 thousand steps (think of these as lessons in its evolution), and here we can see what it thought was ideal after just 50 thousand steps, then 143 thousand steps, and so forth until its pinnacle of opening play… get ready to grimace: the Berlin."
This means AlphaZero thinks 1.e4 is the best?
Keshava Keshava 12/8/2018 01:06
Trumpeting your own private lab tests when no one has been able to duplicate them is no different than the claims about 'cold fusion' years ago. This corporation could easily afford to arrange for independent testing and their claims should not be taken seriously until they do.

re: https://en.wikipedia.org/wiki/Cold_fusion
okfine90 okfine90 12/8/2018 09:56
Computer scientists today are floating in the topmost layer of human cognition system, and they are the leaders. Thanks to Mathematics which is widely accepted as the greatest tool in human civilization, for reasoning and solving problems. I am sure chess players with computer science background(or passionate about it) will love reading the AlphaZero paper, instead of just watching AlphaZero vs Stockfish games.
fgkdjlkag fgkdjlkag 12/8/2018 05:51
I agree with, and am heartened by, many of the comments below, that Deepmind needs to have a public match. Why would stockfish ever be forced to play a game without its opening book? Alphazero needs to play the strongest computer without a handicap and we see how good it is.
fgkdjlkag fgkdjlkag 12/8/2018 05:43
@badibadibadi, it is already happening. The public managed to find out earlier this year that Google, the parent company of Deepmind, was working with the US military on its drone strike program. Only after a massive public outcry and the public resignation of many Google employees, did Google announce that they will end the partnership at some time in 2019.
jsaldea12 jsaldea12 12/8/2018 03:57
That self-learning alphazero has a bright future in medicine, cancer, in science the recesses of the universe, the origin of the universe, in religion, soul, etc.
Nezhmetdinov1919 Nezhmetdinov1919 12/8/2018 03:48
Good reading!
jsaldea12 jsaldea12 12/8/2018 02:06
Continuity: Dr. Stephen hawkings and too many astrophysicists contend black holes exist. As a matter of fact, last year, 2017, a mega telesscope, a conglomerate of giants astronomical telescopes of the world were used, pointed toward the center of milky way to detect the giant black hole of milky way. What happened?
jsaldea12 jsaldea12 12/8/2018 01:48
Can Alphazero unravels the mysteries of the universe?

Other than playing toys like chess, self-teaching Alphazero can be programmed/fed with data that can unravel the mysteries of the universe, like whether invisible black holes are real or not. I contend concept of such black holes, that even light cannot escape, is against the law of physics which is common sense: the bigger the fire (center of galaxy has invisible black hole?), the brighter it glows. In short, there is no black holes.
Jsaldea12
Physicist
Schnabelwolke Schnabelwolke 12/7/2018 08:01
AlphaZero no doubt plays impressive chess but IHMO still is not quite this super hero all those semi-informed people seem to take for granted. Can please someone setup a proper match between Stockfish 8 (with settings like they used in this match, so no opening book, no tablebases) against the best Stockfish currently available so that everyone can see who fares better, AlphaZero or this Stockfish. Then we can start talking.
BrianOber BrianOber 12/7/2018 05:46
The time has come for a true classical computer world chess championship. Same rules and time controls as the human championship. Bring any computers you like. Anything goes. Might very well be more interesting than the human version -- at least if AlphaZero is one of the participants!
hariharansivaji9 hariharansivaji9 12/7/2018 05:31
As per me, if AlphaZero is superior, they should arrange a full-fledged match with Stockfish in public. Why do they keep on publishing a lab test article? No one knows what is happening inside the lab.
Green22 Green22 12/7/2018 05:13
Garry gotta toot his own horn always and every time he's in the lime light. LOL this guy kills me. Must have been brutal growing up with him as a kid then an adult. " I admit that I was pleased to see that AlphaZero had a dynamic, open style like my own"
aji2017 aji2017 12/7/2018 03:46
I replayed each game, and I don't see Stockfish go into the lines played against AlphaZero. For example, in game 1, at move 8, a much better move is 8.Nc4! in the book and it is the no.1 choice, and Stockfish found that move 8.Nc4 as its no.1 choice, with the no.2 choice being 8.Nb3. How in the hell did Stockfish play 8.Qe1? Even if I continuously run it for 10 minutes, Stockfish couldn't find 8.Qe1 as it played in the game. I am using CB15. I already checked many lines in other games and found the same result. Also, Stockfish can find in a matter of seconds all the amazing moves made by AlphaZero. I don't want to think this is a fraud. Or should I say this classified info attracts more investors? Maybe AlphaZero and Stockfish should play again in open public and not in an undisclosed facility, and this time with fair rules to follow. Then let's see the performance of AlphaZero. And whether the AI program is worth investing in.
Bertman Bertman 12/7/2018 03:04
In the final paper, the authors mentioned they had now measured the speed of both players differently, and it had been assumed, mistakenly, the games were the same, just clarified. However, reviewing them all it is clear the games are not the same, so the article has been revised accordingly. Nevertheless, note that the speed difference is roughly the same as in the pre-paper (Stockfish is computing about 900 times more positions per second than AlphaZero), as is the Elo performance of the 100 games played without an opening book, from the start position.
Mr TambourineMan Mr TambourineMan 12/7/2018 02:57
But it lost 6 times.
jsaldea12 jsaldea12 12/7/2018 02:50
A super computer like alphazero makes a move at speed of light of 186,000 miles/sec. while human being, super grandmaster, thinks and makes move 100,000 times slower per second, more or less.!! Thus the time element is very disadvantageous to human GM. Suppose , we try to reduce the time gap to give GM a semblance of equality , say a supercomputer is given 5 minutes to make 40 moves while human GM is given 10 hours to make 40 moves. What do you think?.
badibadibadi badibadibadi 12/7/2018 02:11
It's not about Chess, Shogi and Go anymore.

Humanity is about to create the monster that will destroy it as was predicted long time ago (Matrix).

Also, AlphaZero could be applied to military purposes, imagine alphaZero using drones to kill as many people as possible instead of just chess pieces.

We're all going to be destroyed anytime soon.