Computer predictions of Candidates after round five

3/17/2016 – Readers will no doubt recall the fascinating statistics produced for the Candidates: millions of simulations of weighted data, giving a feel for the statistical favorites, the probable score needed to win, and more. Five rounds have passed, and you might wonder how the numbers have changed. Here are the updated results of the computer simulations.


By James Jorasch and Chris Capobianco

As the number of remaining rounds dwindles, and with only one result in the minds of these competitors, we have been looking for players outside the leaders to start taking more risks. Round five fit that expectation: Caruana decided to add a little chaos to the opening phase by surprising Aronian with the Benoni, an opening rarely played at the elite level. Perhaps he is exploring the more dynamic parts of his repertoire in order to generate winning chances.

Karjakin is the primary beneficiary of all the draws this round. To find out what happens if this trend continues, we simulated millions of tournament runs in which the next four rounds were set to all draws.
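The authors have not published their model, but the core of such a what-if experiment can be sketched in a few lines. Everything numeric below is a placeholder: the standings, the per-game outcome probabilities, the random pairings, and the coin-flip tiebreak are illustrative assumptions, not the authors' data or method (a real model would derive game probabilities from Elo and colour, and use the actual schedule):

```python
import random
from collections import Counter

# Illustrative standings after round five (points from 5 games);
# placeholder numbers, not the authors' data.
scores = {"Karjakin": 3.5, "Aronian": 3.0, "Caruana": 3.0, "Anand": 2.5,
          "Svidler": 2.5, "Giri": 2.5, "Nakamura": 2.0, "Topalov": 1.0}

# Placeholder per-game outcome probabilities: the first-listed player wins
# with P_WIN, the game is drawn with P_DRAW, otherwise the opponent wins.
P_WIN, P_DRAW = 0.20, 0.60

def simulate(rounds, forced_draw_rounds=0, trials=100_000):
    """Estimate each player's winning chances over many simulated finishes.

    The first `forced_draw_rounds` rounds are set to all draws, mirroring
    the what-if experiment described in the article."""
    wins = Counter()
    for _ in range(trials):
        pts = dict(scores)
        players = list(scores)
        for rnd in range(rounds):
            random.shuffle(players)  # crude random pairings, not the real schedule
            for i in range(0, len(players), 2):
                a, b = players[i], players[i + 1]
                if rnd < forced_draw_rounds:
                    pts[a] += 0.5
                    pts[b] += 0.5
                else:
                    u = random.random()
                    if u < P_WIN:
                        pts[a] += 1.0
                    elif u < P_WIN + P_DRAW:
                        pts[a] += 0.5
                        pts[b] += 0.5
                    else:
                        pts[b] += 1.0
        top = max(pts.values())
        leaders = [p for p in pts if pts[p] == top]
        wins[random.choice(leaders)] += 1  # naive stand-in for real tiebreaks
    return {p: wins[p] / trials for p in scores}

# Nine rounds remain after round five; force the next four to be all draws.
# (20,000 trials here for speed; the article's runs used millions.)
chances = simulate(rounds=9, forced_draw_rounds=4, trials=20_000)
```

The structure is the point: freeze the rounds you want to hold fixed, randomize the rest, and tally who ends up alone (or chosen) at the top of each simulated finish.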


The result of this simulation is quite clear. Draws over the next four rounds are a big boost to Karjakin, increasing his winning chances from 28.0% to 42.7%. Aronian remains relatively level, dropping a bit from 26.8% to 25.7%. The rest of the pack sinks downward, with Caruana holding up best.

Yesterday saw Russia (Karjakin and Svidler) surpass the Americans (Caruana and Nakamura) in total chances to win the tournament. And while both teams gained today, the Russians continue to widen the gap.


The expected winning score for the tournament drops a bit to 8.6. Our new score-frequency chart shows how often each of the most common final winning scores occurs in the simulations. We can now see clearly how the likely final scores rise and fall as tournament rounds are completed.

Note that 9.0 started the tournament as a more likely winning score than 8.0, but the two are now neck and neck. We ran a test simulation with all four games of round six set to draws to see what would happen: 9.0 was the winning score in 223,000 of the simulated tournaments, while 8.0 was the winning score in 262,000.
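The score-frequency chart is simply a histogram over the winning score of each simulated tournament, and the expected winning score is the mean of that histogram. A minimal sketch of the bookkeeping, using made-up winning scores (invented weights) in place of real simulation output:

```python
import random
from collections import Counter

random.seed(0)

# Stand-in for simulation output: the winning score of each simulated
# tournament. A real run would record max(final_points.values()) per trial;
# the weights here are invented for illustration only.
winning_scores = random.choices(
    population=[7.5, 8.0, 8.5, 9.0, 9.5],
    weights=[5, 26, 38, 25, 6],
    k=100_000)

freq = Counter(winning_scores)
expected = sum(s * n for s, n in freq.items()) / len(winning_scores)
for score in sorted(freq):
    print(f"{score:4.1f}: {freq[score]:>6}")
print(f"expected winning score: {expected:.1f}")
```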


The number of tournaments expected to be decided in a tiebreaker inches upward, reaching its highest level yet at 26.7%. The following chart shows how that 26.7% is broken down by tiebreak type. Note that the possibility of the tournament being decided by one or more playoff games is vanishingly small: just 0.7%.
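Classifying how each tied first place gets resolved is what drives this breakdown. The cascade below (head-to-head score, then number of wins, then Sonneborn-Berger, then a playoff) reflects the usual Candidates tiebreak order, but treat the exact sequence and the helper's inputs as assumptions rather than the authors' implementation:

```python
def resolve_tie(tied, h2h, wins, sb):
    """Return (winner_or_None, tiebreak_used) for a shared first place.

    tied: players tied on points; h2h[(a, b)]: a's score in games vs b;
    wins[p]: number of games p won; sb[p]: p's Sonneborn-Berger score."""
    # 1. Head-to-head score among the tied players only
    mini = {p: sum(h2h[(p, q)] for q in tied if q != p) for p in tied}
    best = max(mini.values())
    tied = [p for p in tied if mini[p] == best]
    if len(tied) == 1:
        return tied[0], "head-to-head"
    # 2. Greater number of wins
    best = max(wins[p] for p in tied)
    tied = [p for p in tied if wins[p] == best]
    if len(tied) == 1:
        return tied[0], "most wins"
    # 3. Sonneborn-Berger
    best = max(sb[p] for p in tied)
    tied = [p for p in tied if sb[p] == best]
    if len(tied) == 1:
        return tied[0], "Sonneborn-Berger"
    # 4. Still tied: only now does a playoff decide the event
    return None, "playoff"
```

Run over every simulated tournament that ends in a tie, a tally of the second return value produces exactly the kind of breakdown the chart shows, and it also makes plausible why playoffs are so rare: three deterministic criteria must all fail to separate the leaders first.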


With all games drawn, there were no large moves in expected prize winnings. Still, Karjakin posted the largest gain of the tournament so far by a player without winning a game: a boost of €2,800.


If you have questions about tournament statistics that we have not covered, please leave feedback.

About the authors
James Jorasch is the founder of Science House and an inventor named on more than 700 patents. He plays tournament chess, backgammon, Scrabble, and poker. He lives in Manhattan and is a member of the Marshall Chess Club.
Chris Capobianco is a software engineer at Google. He is a two-time finalist at the USA Memory Championships and has consulted on memory for finance, media, and advertising firms, as well as Fortune 500 companies such as Xerox and GE.

Discussion and Feedback

perl2ruby perl2ruby 3/23/2016 09:47
Maybe somebody needs to predict how much these predictions will deviate from reality. A more useful thing to do would be to just find a way to show the current standings, which are harder to find than these inaccurate predictions.
suhas suhas 3/19/2016 03:14
i want round 6 predictions

chessdrummer chessdrummer 3/19/2016 05:10
Not sure of the relevance. The probability before the tournament is more rigorous. The results after five rounds would be obvious and based on the scores they have.
Queenslander Queenslander 3/19/2016 12:55
Interesting but why didn't they wait for the Rd 6 results just before the rest day? By the time most of us are reading these predictions they are already out of date. I'm presuming Aronian is the new (slight) favourite because he plays more dynamically than Karjakin and is more likely to win games.
Raymond Labelle Raymond Labelle 3/18/2016 05:18
To Resistance. "the previous prediction, before the start of the tournament, was pretty similar to their ELO ranking (before the start). If predictions were completely different from the current standings or ELO ranking, when the tournament had not started yet, we would complain about their inadequacy, too."

Good observation! Suggests that Elo rating was quite a heavy factor in the analysts' evaluation.

The Elo rating reflects the whole life of a player. For example, with an almost 3000 performance, Karjakin's rating would go up by 10 or 11 and he still would be the penultimate rated player. Even performance well above or well below a player's rating takes time and many games to fully reflect in the Elo rating, because it is based on the past of the player and has a stability which incorporates a large past.

This is why I think that taking the performance of the players in the different tournaments for, say the last year, would have given us a better estimation of their form immediately before the tournament than taking their rating during the last year.

The blatant underestimation of Karjakin (roughly 6% chances of winning) is an excellent example of this. K. played brilliantly in the recent World Cup, and well on average across the year's tournaments. He may have performed better during the last year than many other candidates, but his Elo rating is still the second lowest of the Candidates as a whole. A statistical comparison based on the performance rating for each tournament during the last year would have given a better comparison between the participants in this tournament than the Elo rating, and would "probably" have given us a better estimation of their form before the Candidates tournament.
Resistance Resistance 3/18/2016 05:14
Round 5's prediction is pretty similar to Round 5's standings; the previous prediction, before the start of the tournament, was pretty similar to their ELO ranking (before the start). If predictions were completely different from the current standings or ELO ranking, when the tournament had not started yet, we would complain about their inadequacy, too. Since the future hasn't happened yet, we can't be sure of results till the time they actually happen, if they happen --because you can't see the sunrise before the sun rises. Yet, we can still simulate possible scenarios based on what has already happened. But, can you guess there's a third color, if you have only yellow and blue?

ulyssesganesh ulyssesganesh 3/18/2016 01:23
good entertainment stuff with these computers!
Mr TambourineMan Mr TambourineMan 3/18/2016 12:50
Maybe there's a genius out there who can discover a method that calculates with 100% certainty who will win, but the huge number of simulations would take longer to perform than it takes to play the tournament ...
chidoznn chidoznn 3/17/2016 11:44
Thank u
fons fons 3/17/2016 11:24
Most of these comments show that most people don't understand statistics or its purpose.
Raymond Labelle Raymond Labelle 3/17/2016 07:36
The comparison between Russians and Americans both seen as collectives is driven more by primary nationalism than by the scientific spirit. As mentioned by others, each player should be seen as an individual. We could even have a long debate on whether primary nationalism is a weakness of the human mind. But this debate is one of values, not of science.

Of course, once we have a winner, national pride of the winner's fans could manifest itself, or even the national pride of the winner himself, but this has nothing to do with science.
X iLeon aka DMG X iLeon aka DMG 3/17/2016 06:14
Americans-Russians??? Really??? Are we back to the cold war??? What kinda stupid pseudo-scientific analysis is that?
Asnasium Asnasium 3/17/2016 12:03
I would never question a computer!
JFBOBBY JFBOBBY 3/17/2016 11:44
Would love to see one of Karyakin, Naka, Caruano, or Giri to challenge Carlsen.
Stupido Stupido 3/17/2016 11:37
Wait a minute, the computers say that as Karjakin leads, so his chances to win the tourney have greatly improved. Amazing!
euldulle euldulle 3/17/2016 10:25
I am confident that the very same results could be obtained from a 4 function calculator and some basic statistical derivations based on players' elo and current standings. Just like the initial 3/9/2016 paper spent huge amounts of cpu time to finally confirm that elo is a statistically sound estimation of players strength.
ale1983 ale1983 3/17/2016 09:48
A fascinating study. I find it really interesting to see how it develops & unfolds as the tournament progresses.
Thank you James and Chris!
hansj hansj 3/17/2016 09:12
It does not make any sense to compare Russians to Americans. This is not a team tournament. Every player is playing for himself, and not for his country.
Beanie Beanie 3/17/2016 08:37
This of course completely refuted the critics who said, before the match, that Vaughan had only one chance in a million.
Beanie Beanie 3/17/2016 08:30
In one of the simulations Stan Vaughan was the wildcard instead of Levon Aronian. This was to reunify the World Championship between FIDE and WCF. The computer crunched the numbers and found that over one million simulations Vaughan won 3 times. Then he was paired against Carlsen in a reunification match. Out of one million matches Vaughan emerged victorious twice.
Omoplata Omoplata 3/17/2016 08:28
These mid-tournament simulations are pointless. It was the same with the 2013 candidates statistics reports; the computers essentially have knee-jerk reactions to any wins and their predictions are then shown to be wrong on more or less a round by round basis.
Hawkman Hawkman 3/17/2016 08:24
The last one had Karjakin with the second to worst chance of winning.
Beanie Beanie 3/17/2016 07:58
Yes it is a wonderful consolation to disappointed fans! "My favourite didn't win in Moscow, but that's ok, he DID win thousands of simulations."
HubertKnott HubertKnott 3/17/2016 07:44
Of course I meant "can." But statistical analyses like this article do serve a purpose---
HubertKnott HubertKnott 3/17/2016 07:39
Beanie, I cam only say LOL
Beanie Beanie 3/17/2016 07:33
I don't think this tournament is going to be played millions of times. Just once, in Moscow, in March 2016.