The Elo rating system – correcting the expectancy tables

3/30/2011 – In recent years statistician Jeff Sonas has participated in FIDE meetings of "ratings experts" and received access to the historical material from the federation archives. After thorough analysis he has come to some remarkable new conclusions which he will share with our readers in a series of articles. The first gives us an excellent overview of how the rating system works. Very instructive.



Man vs Machine – who is winning?
08.10.2003 – Every year computers are becoming stronger at chess, holding their own against the very strongest players. So very soon they will overtake their human counterparts. Right? Not necessarily, says statistician Jeff Sonas, who doesn't believe that computers will inevitably surpass the top humans. In a series of articles Jeff presents empirical evidence to support his claim.

Does Kasparov play 2800 Elo against a computer?
26.08.2003 – On August 24 the well-known statistician Jeff Sonas presented an article entitled "How strong are the top chess programs?" In it he looked at the performance of top programs against humans, and attempted to estimate an Elo rating on the basis of these games. One of the programs, Brutus, is the work of another statistician, Dr Chrilly Donninger, who replies to Jeff Sonas.

Computers vs computers and humans
24.08.2003 – The SSDF list ranks chess playing programs on the basis of 90,000 games. But these are games the computers played against each other. How does that correlate to playing strength against human beings? Statistician Jeff Sonas uses a number of recent tournaments to evaluate the true strength of the programs.

The Sonas Rating Formula – Better than Elo?
22.10.2002 – Every three months, FIDE publishes a list of chess ratings calculated by a formula that Professor Arpad Elo developed decades ago. This formula has served the chess world quite well for a long time. However, statistician Jeff Sonas believes that the time has come to make some significant changes to that formula. He presents his proposal in this milestone article.

The best of all possible world championships
14.04.2002 – FIDE have recently concluded a world championship cycle, the Einstein Group is running their own world championship, and Yasser Seirawan has proposed a "fresh start". Now statistician Jeff Sonas has analysed the relative merits of these three (and 13,000 other possible) systems to find out which are the most practical, effective, inclusive and unbiased. There are some surprises in store (the FIDE system is no. 12,671 on the list, Seirawan's proposal is no. 345).


Topics: ratings, Sonas
heister 3/4/2018 08:05
Thanks for this. I often try to explain the math behind the rating system - mostly generalized. This article brings some shape to it that I appreciate.
DanIGannon 3/30/2016 12:18
The 400 point rule alone may explain some (and possibly all) of the error, without the 83% adjustment. Every time players scored above 92% against opponents rated at least 400 points below them, they became slightly overrated. Thus, as a population, the higher-rated players were artificially made overrated. You'd need an entirely new dataset, uncorrupted by the 400 point rule, before applying any further adjustment.
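The mechanism this comment describes can be sketched in a few lines. Under the standard logistic Elo formula, the expected score keeps rising as the rating gap grows, but FIDE's 400 point rule truncates the gap at 400 points before computing expectancy (FIDE's published tables, derived from the normal distribution, put that cap at roughly a 92% expected score; the logistic form below gives about 91%). The function and ratings here are an illustrative sketch, not FIDE's official implementation:

```python
def expected_score(rating_a, rating_b, cap=None):
    """Expected score of player A against player B under the logistic
    Elo formula. If cap is given, the rating difference is truncated
    to +/- cap, as FIDE's 400 point rule does with cap=400."""
    diff = rating_a - rating_b
    if cap is not None:
        diff = max(-cap, min(cap, diff))
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# Hypothetical example: a player rated 600 points above the opponent.
print(round(expected_score(2600, 2000), 3))           # uncapped: 0.969
print(round(expected_score(2600, 2000, cap=400), 3))  # capped:   0.909
```

Whenever the stronger player's true expectancy (0.969 here) exceeds the capped value used for rating (0.909), any normal result transfers a small surplus of rating points upward, which is the systematic inflation the comment points out.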