Thompson: Leave the K-factor alone!

by ChessBase
5/7/2009 – The debate on whether to increase the rate of change of the Elo list continues. Today we received an interesting letter from Ken Thompson, the father of Unix and C, and a pioneer of computer chess. Ken believes that the current rating system isn't broken and that the status quo is better than change. If anything the ratings should be published more often – every day if possible. Food for thought.


Leave the K-factor alone!

By Ken Thompson, California, USA

First of all: GM Bartlomiej Macieja's argument is wrong because he confuses two variables: 1) the frequency with which FIDE publishes lists and 2) how many games the mythical 2500 GM plays between lists. If FIDE puts out twice as many lists, it has the same effect as the GM playing twice as many games – i.e. the same number of games between lists. He argues that because FIDE is publishing more often, there is a difference, and so K should be increased. Another way to look at it: if FIDE kept the same schedule and the GM played proportionally fewer games, then by the same logic K should increase. The inescapable conclusion is that if FIDE decides to publish every 15 minutes, then K should be 1000.
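
For readers who want to see the arithmetic behind this, here is a minimal sketch of the standard Elo update rule (the numbers and schedule below are illustrative, not Thompson's):

    # Standard Elo expected score and the FIDE-style period update,
    # in which every game in a cycle is rated against the frozen
    # published rating.
    def expected_score(r_a, r_b):
        """Expected score of a player rated r_a against one rated r_b."""
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def period_change(rating, opponents, scores, k=10.0):
        """Rating change over one list period: K * (actual - expected)."""
        expected = sum(expected_score(rating, o) for o in opponents)
        return k * (sum(scores) - expected)

    # A 2500 GM scores 6/10 against 2500 opposition: change = 10*(6-5) = +10.
    # Split the same ten games across two five-game lists and, to first
    # order, the two changes still sum to +10 -- what matters is games
    # played, not how often the list appears.
    print(period_change(2500.0, [2500.0] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]))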

The real problem here is that the GM's rating is "frozen" between lists. This gives rise to the situation in Macieja's argument. But more importantly, it gives a wonderful opportunity to any player who finds himself rated higher or lower on a rating list than his playing strength: he has a full rating cycle to get or give undeserved points – thus corrupting the whole system. The culprit here is the FIDE practice of freezing the ratings.
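
A small illustration of the distortion (our numbers, not Thompson's): suppose a player's true strength has risen to 2600 while the frozen list still says 2500. Every opponent is then rated against the stale number:

    def expected_score(r_a, r_b):
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    k = 10.0
    frozen, true_strength = 2500.0, 2600.0   # list rating vs. actual strength
    opponent = 2500.0

    # An opponent who draws is scored as if he drew an equal: no change.
    gain_rated = k * (0.5 - expected_score(opponent, frozen))          # 0.0
    # Measured against the player's real strength, the draw is worth ~+1.4.
    gain_fair = k * (0.5 - expected_score(opponent, true_strength))    # ~1.4
    print(round(gain_rated, 1), round(gain_fair, 1))

Point by point, a full cycle of such games leaks rating in the wrong direction – Thompson's "undeserved points".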

Now to the K-factor. The only reason to raise the K-factor is to allow the ratings of rising stars to more quickly reflect their strength. This is the "cherry picking" done by Sonas in choosing Bu Xiangzhi as an example. A player whose strength is falling will show the inverse trend. Karpov's strength in the few years after he lost the championship is such an example. FIDE kept him in the top ten long after his strength was top 50.

But raising the K-factor will increase the average difference between a stable player's strength and his rating. It will increase the variance in the relationship between rating and strength. This is easily shown by taking the extremes: a very high K-factor will make every player who won his last game before the rating list over-rated, and vice versa.
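
This is easy to check by simulation. The sketch below (our construction, with draws ignored for simplicity) tracks a stable 2500 player against a 2500 field and measures how far his rating wanders from his true strength for different K:

    import random

    def expected_score(r_a, r_b):
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def rating_spread(k, games=20000, strength=2500.0, field=2500.0):
        """Std. deviation of a stable player's rating around his strength.
        Results follow the true strength; updates follow the rating."""
        rng = random.Random(0)          # same game sequence for every K
        r, trace = strength, []
        for _ in range(games):
            score = 1.0 if rng.random() < expected_score(strength, field) else 0.0
            r += k * (score - expected_score(r, field))
            trace.append(r)
        mean = sum(trace) / len(trace)
        return (sum((x - mean) ** 2 for x in trace) / len(trace)) ** 0.5

    for k in (10, 24, 100):
        print(k, round(rating_spread(k), 1))   # the spread grows with K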

Lowering the K-factor will mean it takes more games for a rising (or falling) player to be accurately rated. Since players are in transition for only short periods of their entire careers, this does not seem important. Many rating schemes will dump points into a rising star so that he will not adversely affect his opponents.
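
The flip side can be sketched the same way: take a player whose strength has just jumped from 2500 to 2600 and follow the expected (deterministic) trajectory of his rating until it comes within 20 points of the new strength:

    def expected_score(r_a, r_b):
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def games_to_converge(k, start=2500.0, strength=2600.0,
                          field=2500.0, tol=20.0):
        """Expected-value Elo trajectory after a jump in strength."""
        r, n = start, 0
        while strength - r > tol:
            r += k * (expected_score(strength, field) - expected_score(r, field))
            n += 1
        return n

    for k in (10, 24):
        print(k, games_to_converge(k))   # lower K needs more games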

So what should be done?

  1. Leave the K-factor alone. It obviously isn't broken. It may not be perfect, but the status quo is better than change. Changing the K-factor would have huge negative implications for the inflation/deflation of the entire rating system. Ideally, the K-factor should be set to provide inertia against the variance of average human play. K=10 is not far off, but K=24 is way too big. K=15 would probably be better, but not by enough to justify risking inflation.

  2. If anything is broken, it is FIDE's freezing of the ratings between lists. Publishing the list more often would help (how about publishing it every day?), but that should have no effect on the K-factor.


Ken Thompson

Kenneth Lane Thompson was the principal inventor of UNIX. Even today, more than 35 years later, UNIX and its descendants are still widely regarded as the best computer operating systems to have ever been developed. He was born in 1943 in New Orleans, Louisiana and spent his childhood as what he called a navy brat. He received his Bachelor's and Master's degrees, both in electrical engineering, from the University of California at Berkeley (UCB). Soon thereafter, in 1966, he was hired by Bell Labs, the research and development arm of AT&T, the former U.S. telecommunications monopoly.

1969 was that magic year in which mankind first went to the moon and Thompson wrote the game called Space Travel. He decided to write his own operating system, in large part because he wanted a decent system on which to run his game on the PDP-7. He accomplished this in little more than a month, spending one week each writing the kernel (i.e., core of the operating system), the shell (which is used to read and run commands that are typed into the computer), an editor and an assembler (a program to convert source code into machine code that can be directly understood by a computer's CPU). He wrote all of this in PDP-7 assembly language.

The PDP-7 on which he developed and first ran his operating system had an 18-bit word length (in contrast to the eight-bit bytes and 32- or 64-bit words that are now nearly universal) and only four kilobytes of memory. This tiny memory was undoubtedly a major factor in Thompson's keeping his operating system extremely small and giving it an elegant simplicity that has, in turn, played an important role in the great success of it and its various descendants (including Linux).

The following year Thompson wrote the B programming language, which started out as an effort to improve the existing BCPL (basic combined programming language) language. The most important thing about B is that it became a precursor to the C language, the original version of which was completed by Dennis Ritchie in 1972. C soon became one of the world's most powerful and commonly used programming languages and remains so even today. Ritchie, who joined Bell Labs the year after Thompson, also played a major role in the early development of UNIX. [Source: LINFO]

In 1979 Ken and a colleague at Bell Laboratories decided to build a special-purpose machine to play chess, using many hundreds of chips, worth about 20,000 dollars. "Belle" was able to search about 180,000 positions per second (the supercomputers of the time managed around 5,000 positions per second) and to go eight to nine ply deep in tournament games, which enabled it to play in the master category. It won the world computer chess championship and all other computer tournaments from 1980 to 1983, until it was superseded by giant Cray X-MPs costing a thousand times more.
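
As a rough plausibility check (the time control and branching factor below are our assumptions, not figures from Thompson): at roughly three minutes of search per move, 180,000 positions per second and an effective branching factor of about six for an alpha-beta searcher of that era line up well with an eight-to-nine-ply search:

    import math

    nodes_per_move = 180_000 * 180     # ~3 minutes of search per move
    branching = 6.0                    # assumed effective branching factor
    depth = math.log(nodes_per_move) / math.log(branching)
    print(f"{nodes_per_move:,} nodes -> ~{depth:.1f} ply")   # ~9.7 ply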


Chess computers and endgames: Ken Thompson with Garry Kasparov

Ken is also one of the pioneers of endgame databases. In the 1980s he began to generate and store all legal endgame positions with four and five pieces on the board. A typical five-piece ending, like king and two bishops vs. king and knight, contains 121 million positions. With a pawn, which is asymmetric in its movements, the number rises to 335 million. Thompson wrote programs that generated all legal positions and worked out every forcing line possible in each endgame. He also compressed the resulting data in a way that allowed about 20 endgames to be stored on a standard CD-ROM.
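
A back-of-the-envelope count shows where such numbers come from (the symmetry handling below is simplified; it is our sketch, not Thompson's method):

    from math import prod

    # King + two bishops vs. king + knight, no pawns: place five men
    # on distinct squares of the 64-square board.
    placements = prod(range(60, 65))   # 64*63*62*61*60 ordered placements
    placements //= 2                   # the two bishops are interchangeable
    positions = placements * 2         # either side may be to move
    # Pawnless endgames have an eight-fold board symmetry, so a generator
    # needs to store only about an eighth of these positions.
    reduced = positions // 8
    print(f"{positions:,} raw, ~{reduced:,} after symmetry reduction")

That lands at roughly 114 million, in the same ballpark as the 121 million quoted above; the exact count depends on how a generator treats illegal positions and the fixed points of the symmetries.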

Coming soon: John Nunn's wrap-up reply to the K-factor articles
