Just over a year ago, the Alpha Zero chess program made headlines by sensationally defeating Stockfish 8. The playing style in the published games was truly impressive, reminiscent of Anderssen and Morphy, with positional sacrifices of pawns and pieces. Many chess aficionados would like to buy Alpha Zero if it were available and could run on conventional computers. While that is not an option, there is fortunately Komodo MCTS, a chess program available at an affordable price that uses techniques similar to Alpha Zero's and can be installed on consumer PCs. This version has made great progress since its first release, as two remarkable games from December 2018 show.
In the first round of Division 2 of the current TCEC 14 tournament, which many regard as the actual, albeit unofficial, World Computer Chess Championship, Komodo MCTS defeated Nirvana 2.4 in spectacular fashion. Playing with the white pieces against a Sicilian Defence, Komodo sacrificed a pawn, then the exchange and finally a bishop, but Black was already lost after grabbing the pawn:
In this game, the program's open-minded approach, a product of the Monte Carlo tree search that Komodo MCTS uses, is particularly impressive.
In contrast to the conventional alpha-beta pruning search algorithm, this procedure requires no chess knowledge apart from the rules of the game and its possible outcomes: win, loss or draw. Roughly speaking, this is what happens: in a given position, the program tries out a number of possible moves by playing against itself until a result is obtained: 1, 0 or 1/2. What the best-scoring move looks like does not matter. Whether it gives up a pawn, a piece or the queen, allows doubled or tripled pawns, or violates some iron principle formulated by the great chess teachers is irrelevant, as long as it is successful. You could call this an open-minded approach.
By playing out a large number of games to the end, the program draws conclusions about the best moves in a particular position. It examines the most successful moves in more detail than the unsuccessful ones. The result is a "search tree" from which, at the end, a move is chosen purely on the basis of the best performance.
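The cycle described above — try moves, play games out, concentrate on the successful branches — can be illustrated on a toy game. The sketch below is a hypothetical, minimal UCT-style Monte Carlo tree search for one-pile Nim (each player removes 1-3 stones; whoever takes the last stone wins); the names `best_move`, `rollout` and the game itself are illustrative assumptions, not Komodo's actual code:

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile = pile          # stones left
        self.player = player      # whose turn it is at this node (0 or 1)
        self.parent = parent
        self.move = move          # the move that led here
        self.children = []
        self.untried = legal_moves(pile)
        self.wins = 0.0           # wins for the player who moved INTO this node
        self.visits = 0

def uct_select(node, c=1.4):
    # Balance exploitation (win rate so far) against exploration (rarely tried moves).
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(pile, player):
    # Play random moves to the end; return the winner (0 or 1).
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return player
        player ^= 1
    return player ^ 1  # empty pile: the previous player already took the last stone

def best_move(pile, player=0, iterations=4000):
    root = Node(pile, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried move as a new child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, node.player ^ 1, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position
        winner = rollout(node.pile, node.player)
        # 4. Backpropagation: credit each node's "incoming" player
        while node is not None:
            node.visits += 1
            if winner == (node.player ^ 1):
                node.wins += 1
            node = node.parent
    # The most-visited move is the most trusted one.
    return max(root.children, key=lambda ch: ch.visits).move

print(best_move(5))  # from 5 stones, taking 1 leaves a losing 4 for the opponent
```

The visit counts concentrate on the successful move exactly as the text describes: from a pile of five, the search settles on taking one stone, and no chess-style knowledge beyond the rules was needed.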
This method is particularly effective when combined with "neural networks". It is well known that Alpha Zero played countless games against itself for nine hours and learned from the results. However, enormous computing capacity was made available for this; otherwise, nine hours would hardly have been enough. Afterwards, Alpha Zero was able to beat Stockfish 8, one of the best chess programs in the world. In a nutshell: AI (artificial intelligence) learns by trial and error, purely from its own experience.
In this context, I would like to mention the discussion between world champion Dr Emanuel Lasker and his archrival Dr Siegbert Tarrasch about the "aesthetic" value of a move.
Just before his World Chess Championship match against Tarrasch in 1908, Lasker made these remarks:
"Dr Tarrasch is a thinker who loves deep and complex considerations. He is willing to acknowledge the effectiveness and usefulness of a move if he considers it both beautiful and theoretically correct. But I accept this kind of beauty only if and when it is appropriate. He admires an idea for its depth, I admire it for its effectiveness. My opponent believes in beauty, I believe in strength. I think that a move is also beautiful in that it is powerful." (a)
From this, it becomes clear that world champion Dr Emanuel Lasker's considerations were guided by the attempt to achieve the best results. This is exactly the approach Komodo MCTS also pursues.
In an interview published in December 2018, Komodo developer IGM Larry Kaufman stated that Komodo MCTS is making progress ten times faster than the "normal" Komodo version, which is known to rely on the proven alpha-beta pruning. (b) Very much simplified, the idea behind that algorithm is this:
Alpha-beta pruning is about excluding as many moves and variations as possible because their results are evidently unfavourable. The alpha value is a lower bound: the score that player "A" is guaranteed at least, given the moves examined so far. The beta value is an upper bound: the most player "A" can hope for, because player "B" will choose the best countermove. So if I check a move that wins a knight and find that the opponent gets at most one pawn with his best continuation, I can assign this move a value of +2.00, all other things being equal. Now I look at another move in the same starting position and notice that, although I can capture an enemy rook, my opponent will win at least my queen and three pawns with correct counterplay, so the final result is -7.00.
In this way, all candidate moves are assigned values. Those with the best values rise to the top and are immediately examined more closely and deeply, while those with the worst values come last and usually do not improve even at greater search depths. If this case nevertheless occurs, a re-evaluation takes place immediately, and a move so far evaluated at, say, -2.55 suddenly jumps, with a plus score, to first, second or third place.
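The knight-for-pawn versus rook-for-queen example above can be reproduced in a few lines of negamax search with alpha-beta cutoffs. This is a hypothetical minimal sketch over a hand-built two-ply tree (the leaf values in pawns are the ones from the text), not an engine's actual search:

```python
import math

visited = []  # leaves actually evaluated, to show the pruning effect

def alphabeta(node, alpha=-math.inf, beta=math.inf):
    # A node is either a leaf evaluation (a number, from the viewpoint of
    # the side to move there) or a list of child nodes.
    if isinstance(node, (int, float)):
        visited.append(node)
        return node
    best = -math.inf
    for child in node:
        # Negamax: a child's value is seen from the opponent's side,
        # so the search window is negated and swapped.
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent will steer away from this line anyway
    return best

tree = [
    [2.0, 3.5],    # move 1: the opponent's best reply still leaves us +2.00
    [-7.0, -4.0],  # move 2: the opponent has a reply reaching -7.00 for us
]
print(alphabeta(tree))   # 2.0 — the knight-for-pawn move is chosen
print(visited)           # [2.0, 3.5, -7.0] — the -4.0 leaf was never examined
```

Note how the pruning works: once move 1 guarantees +2.00, the very first refutation of move 2 (the -7.00 line) is enough to discard it, so the remaining reply is never evaluated.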
In contrast to the Monte Carlo method, however, the positions are not played out to the end (unless a mate is found) but are followed to a search depth that increases over time, and the resulting positions are evaluated according to precisely defined criteria. Not only the values of the pieces and pawns matter here, but also positional factors such as king safety, pawn weaknesses, open lines, space advantage, control of the centre, and so on.
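A drastically simplified static evaluation along these lines might combine material values with a token positional term. The function below is a hypothetical illustration (names and weights are invented for the example); real engines use far more elaborate, carefully tuned criteria:

```python
# Conventional material values in pawn units.
PIECE_VALUES = {'P': 1.0, 'N': 3.0, 'B': 3.0, 'R': 5.0, 'Q': 9.0, 'K': 0.0}

def evaluate(white_pieces, black_pieces, white_king_safe=True, black_king_safe=True):
    """Score from White's viewpoint, in pawns."""
    material = (sum(PIECE_VALUES[p] for p in white_pieces)
                - sum(PIECE_VALUES[p] for p in black_pieces))
    # One token positional factor: a small bonus for the side with the safer king.
    king_safety = 0.3 * (white_king_safe - black_king_safe)
    return material + king_safety

# White has given a knight for a rook and a pawn (won the exchange plus a pawn):
print(evaluate(list("RRQPPPPK"), list("RNQPPPK")))  # 3.0
```

In a real engine, dozens of such terms (pawn structure, open files, space, centre control) are weighted and summed, and it is these leaf evaluations that the alpha-beta search propagates back to the root.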
This ingenious method has been refined further and further in recent years and has proven extremely successful. Nevertheless, it is possible that the scope for increasing playing strength may be exhausted at some point. Think of the latest Stockfish version 10, in which the programmers and their countless volunteer testers invested about ten months of development work to gain just 15 Elo points over the predecessor version Stockfish 9 (64-bit, 1 CPU, according to the CCRL 40/40 Elo list dated January 12th, 2019).
Let's compare the three previous versions: Stockfish 7 was released on January 2nd, 2016, and has an Elo rating of 3246. Stockfish 8, from November 2nd, 2016, just 10 months later, jumped to 3301, a gain of 55 Elo points, equivalent to 5.5 points per month of development. Stockfish 9, from January 31st, 2018, came out about 15 months later and reached a rating of 3367, 66 points more than the previous version, or 4.4 Elo per month. Then ten months went by before the latest version, Stockfish 10, was released on November 30th, 2018. It has an Elo rating of 3382, only 15 points more than Stockfish 9. The curve suddenly flattens out to an increase of just 1.5 points per month.
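The per-month figures above are easy to verify. This short script recomputes them from the quoted CCRL ratings, with months counted from the Stockfish 7 release:

```python
# (name, CCRL Elo, months since the Stockfish 7 release)
versions = [
    ("Stockfish 7",  3246,  0),
    ("Stockfish 8",  3301, 10),
    ("Stockfish 9",  3367, 25),
    ("Stockfish 10", 3382, 35),
]

for (n1, r1, m1), (n2, r2, m2) in zip(versions, versions[1:]):
    rate = (r2 - r1) / (m2 - m1)
    print(f"{n1} -> {n2}: +{r2 - r1} Elo in {m2 - m1} months = {rate:.1f}/month")
```

Running it reproduces the 5.5, 4.4 and 1.5 Elo-per-month figures quoted in the text.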
These observations might indicate that the development potential of chess programs with the conventional alpha-beta pruning search algorithm has, in fact, already been largely exhausted. But of course one should not judge hastily; let's wait for the next generation of popular programs. Only then will we know whether this trend consolidates or not. In any case, the above makes clear why the Komodo developers Larry Kaufman and Mark Lefler believe that their MCTS version will make the greatest progress. In the current Top Chess Engines Championship (TCEC 14), Komodo MCTS has already sensationally worked its way into the "Premier League", the class of the best eight chess programs.
It is remarkable that the new program with the Monte Carlo tree search was already able to win some games against a giant like Stockfish 10. The following game was played on December 21st, 2018, with a time control of 40 moves in 40 minutes. Both programs could use four processors. With a surprising pawn sacrifice, Komodo MCTS succeeded in positionally outmanoeuvring its formidable opponent. Curiously, after White's 26th move all three knights are placed on the edge of the board:
Komodo MCTS is available together with the conventional Komodo version in the ChessBase shop. This lets you take advantage of the Monte Carlo tree search without abandoning the time-proven alpha-beta pruning. By using the two search algorithms alternately, you will gain deeper insights into complicated positions, which is useful both for analysing games and for computer-aided preparation of opening variations.
(a) Quoted from Harold C. Schonberg, "Die Grossmeister des Schach", Frankfurt 1976, p. 119