In a few weeks Garry Kasparov will take on X3D Fritz in a high-profile man-machine chess match. It is very difficult to figure out who should be the statistical favorite, because computer ratings are somewhat different from human FIDE ratings.
Every two or three months, the Swedish Chess Computer Association (abbreviated SSDF in Swedish) publishes a rating list estimating the strength of the top chess computers. The ratings are based upon thousands of games hosted by SSDF members, between commercially available computer programs running on specific hardware. Different versions of the same program are treated separately. Thus the top seven in the July 2003 SSDF list were all various versions of Fritz or Shredder, running on the most powerful hardware used by SSDF members (a 1200 MHz Athlon with 256 MB of RAM).
The SSDF list is based mostly upon games between computers, and is intended to report the overall results from long computer-computer matches. However, the ratings are published on the same scale as the FIDE ratings for humans. This leads to an irresistible urge to compare computers' SSDF ratings against humans' FIDE ratings, and to speculate about who is strongest...
It must be disturbing to the top human players, watching the SSDF ratings creep slowly upwards, month after month after month. Four years ago we saw the first 2600+ SSDF rating, a year ago the first 2750+ rating was achieved, and in July the 2800 barrier was cracked. I should remind you that Garry Kasparov and Vladimir Kramnik are the only humans ever to manage a 2800+ FIDE rating. To illustrate this visually, here is a graph showing how Kasparov's FIDE rating has compared against the SSDF #1 computer (usually Fritz, sometimes Shredder, occasionally other programs) over the past five or six years:
I will go into more details about the peaks and valleys of this graph in Part III, but for now I want you to look at it and get a general sense of what's going on. The red line is Garry Kasparov's rating over time, and the blue line is the rating of the top computers on the SSDF list. The blue line is creeping closer and closer to the red line. It seems just on the verge of crossing over. Does this mean that Kasparov and Kramnik will be humanity's last true World Champions? Is it only a matter of a few months until we see a computer which is undeniably stronger than the top players? Probably most of you believe that the answer to these questions is "Yes".
I have been spending a lot of time this year investigating these questions. Believe it or not, my current answer is "No". I don't believe that computers will inevitably surpass the top humans. Even if it does happen, that may still be many years in the future. And I believe that the empirical evidence supports my claim.
There are two key things to measure, two key questions to answer:
Question #1: A decade ago, top grandmasters were undeniably stronger than chess computers. There was a large gap in strength, roughly 300 Elo points. In chess terms, if a top grandmaster had played 100 games against a top computer, the grandmaster would have won the match by a score of about 85-15 (roughly speaking). In the past ten years, computers have certainly reduced the gap. How large is the gap right now, and who is ahead?
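For readers who want to see where that 85-15 figure comes from, here is a minimal sketch (my own illustration, not part of the original analysis) applying the standard Elo expectation formula to a 300-point gap:

    # Minimal sketch: converting an Elo rating gap into an expected score.
    # This uses the standard Elo expectation formula to show where the
    # "about 85-15 over 100 games" figure comes from.

    def expected_score(rating_gap: float) -> float:
        """Expected score for the stronger player, given the gap in Elo points."""
        return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

    gap = 300
    e = expected_score(gap)
    print(f"Expected score at +{gap} Elo: {e:.1%}")                       # ~84.9%
    print(f"Over 100 games: about {round(e * 100)}-{100 - round(e * 100)}")  # about 85-15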
Question #2: Who is improving faster, top grandmasters or chess computers? What can we say about how the situation will be different in one year, or ten years, or fifty years?
First of all, let me clarify that by the term "chess computers" I am referring to chess-playing software running on normal computers, as well as actual chess-specific hardware (with its own software) like Deep Blue or Brutus. I know that there is a difference between software and hardware, and a difference between chess engines and chess hardware, but it gets in the way of communication when I try to include so many words in my sentences. So I am just saying "chess computer" in an attempt to be succinct.
Okay, so let's tackle the two questions one at a time. The first one is easier: How large is the gap, if any, between today's top grandmasters and the top chess computers? The best way to address this is to look at the results from recent years. I went through all of the non-blitz historical games I could track down, where a computer program using state-of-the-art hardware had played against a human with a rating of 2700 or higher. I grouped those together into specific matches or tournaments, and added up (for each event) whether the computer won, lost, or drew against the 2700+ crowd. The results were quite interesting…
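To make that bookkeeping concrete, here is a rough sketch of the kind of per-event tally described above. The event names and results below are placeholders, not the actual game records used in the analysis.

    # Rough sketch of a per-event tally, from the computer's point of view.
    # The records below are placeholders, not the real dataset.
    from collections import defaultdict

    # Each record: (event name, computer's score in one game: 1, 0.5, or 0)
    games = [
        ("Event A (match vs 2700+ GM)", 0.5),
        ("Event A (match vs 2700+ GM)", 1.0),
        ("Event B (tournament)", 0.0),
    ]

    tally = defaultdict(lambda: {"wins": 0, "draws": 0, "losses": 0})
    for event, score in games:
        if score == 1.0:
            tally[event]["wins"] += 1
        elif score == 0.5:
            tally[event]["draws"] += 1
        else:
            tally[event]["losses"] += 1

    for event, counts in sorted(tally.items()):
        print(event, counts)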
There is absolutely no evidence to suggest that chess computers have surpassed the top grandmasters. The top humans are holding their ground, battling the top computers to seven straight drawn matches. Neither side has managed to win one of these matches in the last five years! Humanity isn't winning, but it isn't losing either. Although computers have clearly been improving in recent years, the strongest humans seem to be improving at about the same rate.
This leads us right in to the second (and much more difficult) question: Are chess computers improving faster than grandmasters? I will tackle this question in Part II, next week.
Jeff Sonas is a statistical chess analyst who has written dozens of articles since 1999 for several chess websites. He has invented a new rating system and used it to generate 150 years of historical chess ratings for thousands of players. You can explore these ratings on his Chessmetrics website. Jeff is also Chief Architect for Ninaza, providing web-based medical software for clinical trials.