Alternative method for selecting world championship candidates

1/23/2006 – Can you imagine Mark Paragua playing in the FIDE candidates matches? Or Fischer coming out and losing four games to qualify. These are some of the quirks of the current system. While the world championship format of FIDE is probably the best under the circumstances, the selection process could be reformed. Chess statistician Jeff Sonas has a proposal.


Alternative method to select the sixteen players
for the FIDE Candidates matches

By Jeff Sonas

The latest incarnation of the FIDE world championship cycle has taken another step forward. With the completion of the FIDE World Cup in Khanty-Mansiysk in December, coupled with the long-expected decisions by Garry Kasparov and Vladimir Kramnik to decline participation in the FIDE cycle, the final pairings for the first round of eight Candidates matches were announced by FIDE in early January. Those matches will be six games in length.

Out of the sixteen players involved in the matches, five of them qualified by rating, ten qualified thanks to a high finish in the recent FIDE World Cup, and former FIDE champion Rustam Kasimjanov was given an automatic spot in the Candidates matches, completing the set of sixteen players. Out of those eight matches, the eight winners will advance to one more round of six-game matches, and the four winners will join the top four finishers (Veselin Topalov, Viswanathan Anand, Peter Svidler, and Alexander Morozevich) from last year’s San Luis tournament, in a double-round robin FIDE World Championship tournament sometime in 2007.

I believe that most fans prefer the final stage of the world championship to be the traditional match between the defending champion and one challenger (who was the winner of a Candidates cycle). So the idea of the World Championship being a round-robin tournament is distasteful to many, but the cold reality is that the round-robin tournament may be the most feasible solution, financially speaking. And Alexei Shirov, recently elected to the board of the Association of Chess Professionals, indicated that his colleagues agree that it is not a bad format. This is not yet the place for that full debate. But even if the World Championship must be an eight-player tournament for whatever reason, I still think that the method used by FIDE to select its candidates could be greatly improved upon.

For one thing, there was way too much emphasis on the results of the FIDE World Cup, which is itself a flawed event, and not enough emphasis on the overall recent results from top players. Instead of the single-elimination World Cup, a double-elimination World Cup, such as I have described in the past, would be just as easy to administer, would mix around the players more uniformly and fairly, and would give deserving top players like Vassily Ivanchuk a fighting chance to atone for one ill-advised loss. I do think that 4 or 5 spots awarded from the World Cup would have been more reasonable than the 10 we actually got.

In addition, FIDE used extremely old ratings to determine the five players who qualified by rating. Although the final pairings for the matches were only just announced in January 2006, the five players (Peter Leko, Michael Adams, Judit Polgar, Alexei Shirov, and Etienne Bacrot) were selected on the basis of their average rating in the two FIDE rating lists of July 2004 and January 2005, with complete disregard for the four more-recent FIDE rating lists that have been released since January 2005.

Alexei Shirov recently suggested an improvement of using a more current rating list as the source for any rating-based selection of candidates. I agree that such a change would be an improvement. However, I would like to suggest a more significant and radical improvement. Instead of using a rating list at all for the selection of candidates, I think it would be better to use a performance rating measure, one which only considers games played over the previous 12 months (or possibly 24 months).

For one thing, there are flaws in the FIDE rating list itself. As I have said many times in the past, the Elo formula used for calculating the FIDE ratings is way too conservative, and much too slow to catch up to the results of rapidly-improving or rapidly-declining players. In addition, the Elo formula introduces a significant bias against players who tend to outrate their opponents by around 100-200 rating points. This means that the players who only play against their elite counterparts are given a relative advantage by the Elo formula. For players like Vassily Ivanchuk or Alexei Shirov, who play a wider range of opponents and thus do outrate their opposition by an average of 100-200 points, their FIDE ratings are going to be somewhat lower than they deserve, because of this unintentional bias.

But even with an improved FIDE rating formula, it would still seem unfair to use the FIDE rating list, because the players who are already highly rated have a huge advantage. They don’t even have to play much, or at all; they can just hang on to their high ratings and risk nothing. Even retired players don’t have to start over! Gata Kamsky retired five years ago with a 2717 rating (based largely on his results during the mid-1990’s), and Kamsky retained that inactive 2717 rating indefinitely until he returned to professional chess a year ago. If he had begun his comeback a few months earlier, and had enjoyed slightly more success in his first few events, then Kamsky (and not Judit Polgar) would have qualified for the eighth spot in the world championship tournament last year in San Luis, and then again for the 2006 Candidates matches, based mostly on his results from a decade ago!

As a more extreme example, Bobby Fischer last played a FIDE-rated game in 1972. His inactive FIDE rating has stayed at 2780 for more than three decades. He could have come out of retirement and resigned four rated games against extremely weak opposition; having played four games, his rating would then be converted over to “active”. And that rating would easily be high enough to allow him to qualify based on his placement on the rating list, despite scoring no points in any rated games over the past three decades!

Surely it would be better if all players started each cycle from a more level playing field. And that is why I think it would be more sporting, and more interesting, if qualification were based, not upon the rating list, but upon a performance rating measure covering a specific time range. For instance, we could calculate each player’s performance across a particular calendar year, and allow some number of top players from each yearly performance list to qualify for the Candidates matches. So perhaps it could have been five players from the 2004 performance list, five players from the 2005 performance list, and five players from the 2005 World Cup, rather than the way FIDE did it. This would ensure that qualification is based only upon the players’ success during a single year, and not automatically “inherited” from success in previous years via a stagnant rating system.

However, you have to be careful when using performance ratings, because there are some potential problems. Most importantly, a raw performance rating does not reward activity; it depends only upon your percentage score and the strength of your opponents, whether that is across 2 games or 200 games. For instance, the top-six list for overall performance rating during 2005 (minimum five games played, using games from the TWIC archives) would look like this:

#1 Garry Kasparov - 2874 performance: 8/12 (67%) vs 2733 opposition
#2 Veselin Topalov - 2825 performance: 38.5/61 (63%) vs 2714 opposition
#3 Viswanathan Anand - 2797 performance: 40/66 (61%) vs 2707 opposition
#4 Mark Paragua - 2796 performance: 4/6 (67%) vs 2655 opposition
#5 Levon Aronian - 2771 performance: 95/136 (70%) vs 2602 opposition
#6 Sergej Djachkov - 2762 performance: 3.5/5 (70%) vs 2592 opposition

Some of those names clearly belong that high on the list, but others probably don’t. And even if you insist on more games played, you can still get players who build up massive performance ratings with a very high percentage score against relatively weak opponents. For instance, #17 on this list would be Vladimir Kostic, with a 91% score in 11 games against 2375-rated opposition. Such a list could easily lend itself to artificial manipulation as well, so it would be better if games played against strong opposition were somehow more beneficial. Or perhaps only certain certified events would be included for consideration in this list.
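To make the small-sample problem concrete, here is a minimal sketch of a raw performance rating in Python, using the simple “linear” conversion described at the end of this article (each 10% of percentage score counts as 85 rating points). The function name is my own illustration, and exact rounding may differ by a point from the published figures.

```python
def raw_performance(points: float, games: int, avg_opp: float) -> float:
    """Raw linear performance rating: the average opposition rating
    plus 850 points per 100% of score above 50%. The number of games
    plays no role at all, which is the flaw discussed above."""
    return avg_opp + (points / games - 0.5) * 850

# Kasparov in 2005: 8/12 (67%) against 2733-rated opposition.
print(round(raw_performance(8, 12, 2733)))   # 2875

# A 4/6 result against 2655-rated opposition rates nearly as high,
# even though it is based on only six games.
print(round(raw_performance(4, 6, 2655)))    # 2797
```

Because only the percentage score and the opposition strength enter the calculation, a hot streak over a handful of games can outrank a full year of elite results.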

An ideal performance measure would reward activity, and would reward players facing strong opponents, and could be limited to results covering a specific time frame (unlike the FIDE rating list) or even just a specific set of events. Where could we find such a performance measure? Well, actually, this performance measure already exists! I developed it a year ago for use on my Chessmetrics website. I derived the simple formula objectively, to optimize the ability of the ratings to predict future results, but it agrees incredibly well with subjective measures such as the annual Chess Oscar (determined by voting members of the chess community).

For instance, I have calculated annual performance scores for each of the past 26 years that the Chess Oscar was awarded, and in 24 of the 26 years, the player with the highest performance score for the year was also awarded the Chess Oscar! And the other two times (1995 and 1997), the player with the highest performance score finished second in the Chess Oscar voting, while the player with the second-best performance score finished first. The annual performance score matches the subjective impression of chess fans about who had the best year much more effectively than does something like the FIDE rating list, which as I’ve said is “tainted” by too much emphasis on results from prior years.

In the same way that so many other sports start their “seasons” completely fresh, we could start each year from scratch and maintain a running total, over the course of the year, of who has had the greatest success during the year. Rather than being handicapped by a lethargic rating system, anyone with excellent results during the year could jump right to the top of the list.

To illustrate what this performance score would look like, I have calculated the scores across the year 2005, and I have taken several “snapshots” of the top-twelve list over the course of the year, so you can see how the list would have shifted around as the year progressed. If the list had been used to award Candidates spots, you can be sure that there would have been great interest in the list, and emphasis on posting impressive results during the year, not just in one flawed tournament but in all events during the year. And what could be better than a system where the top players are encouraged to play frequently against strong opponents, and nobody can rest inactive on their high-rating laurels since qualification spots are no longer awarded based upon ratings?

Here is a summary of the 2005 performance list, with eight snapshots of the top-twelve at various points during the year, followed by the overall top-50 list as of the end of the year.

January 30th (just after Corus Wijk aan Zee): At the close of the Corus Wijk aan Zee tournament, the overall list is dominated by the top finishers in the Corus A event. The formula places a special emphasis on the strength of opposition, more so than the traditional raw performance rating calculation does. Even the players who scored 50% in the Corus A are in the top-ten.

#1 Leko,P - 2784 score: 8.5/13 (65%) vs 2719
#2 Anand,V - 2760 score: 8/13 (62%) vs 2716
#3 Topalov,V - 2741 score: 7.5/13 (58%) vs 2718
#4 Grischuk,A - 2723 score: 7/13 (54%) vs 2722
#5 Polgar,Ju - 2721 score: 7/13 (54%) vs 2720
#6 Adams,Mi - 2721 score: 7/13 (54%) vs 2719
#7 Kramnik,V - 2720 score: 7/13 (54%) vs 2718
#8 Bruzon,L - 2705 score: 6.5/13 (50%) vs 2726
#9 Van Wely,L - 2703 score: 6.5/13 (50%) vs 2724
#10 Ponomariov,R - 2702 score: 6.5/13 (50%) vs 2722
#11 Georgiev,Ki - 2698 score: 7.5/9 (83%) vs 2541
#12 Karjakin,Sergey - 2693 score: 9.5/13 (73%) vs 2562


February 22nd (just before Linares): Going into the Linares tournament, there is no sign of Garry Kasparov on the list. Despite his #1 rating on the FIDE list, he has not yet played any rated games during 2005, so he derives no benefit from his earlier successes. He must earn his spot on the list each year.

#1 Leko,P - 2784 score: 8.5/13 (65%) vs 2719
#2 Anand,V - 2763 score: 9.5/15 (63%) vs 2697
#3 Topalov,V - 2741 score: 7.5/13 (58%) vs 2718
#4 Ivanchuk,V - 2735 score: 7/9 (78%) vs 2619
#5 Kharlov,A - 2726 score: 5.5/7 (79%) vs 2633
#6 Adams,Mi - 2725 score: 9/16 (56%) vs 2696
#7 Grischuk,A - 2723 score: 7/13 (54%) vs 2722
#8 Polgar,Ju - 2721 score: 7/13 (54%) vs 2720
#9 Kramnik,V - 2720 score: 7/13 (54%) vs 2718
#10 Aronian,L - 2718 score: 15/22 (68%) vs 2587
#11 Sutovsky,E - 2717 score: 13/18 (72%) vs 2571
#12 Bruzon,L - 2705 score: 6.5/13 (50%) vs 2726


March 10th (just after Linares): Upon the close of Linares, co-winners Topalov and Kasparov are now at the top of the annual list. Although Kasparov’s raw performance rating is higher, Topalov is still given a significantly higher performance measure, because he has played more than twice as many games as Kasparov and the formula rewards his more statistically significant results.

#1 Topalov,V - 2811 score: 15.5/25 (62%) vs 2729
#2 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#3 Leko,P - 2786 score: 14.5/25 (58%) vs 2730
#4 Anand,V - 2783 score: 16/27 (59%) vs 2714
#5 Bacrot,E - 2761 score: 13.5/20 (68%) vs 2646
#6 Grischuk,A - 2749 score: 12.5/22 (57%) vs 2704
#7 Ivanchuk,V - 2738 score: 7.5/10 (75%) vs 2628
#8 Bologan,V - 2737 score: 13/20 (65%) vs 2637
#9 Adams,Mi - 2736 score: 14.5/28 (52%) vs 2717
#10 Sutovsky,E - 2732 score: 14/19 (74%) vs 2573
#11 Polgar,Ju - 2721 score: 7/13 (54%) vs 2720
#12 Kramnik,V - 2720 score: 7/13 (54%) vs 2718
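The Topalov/Kasparov ordering here can be checked numerically with the padding scheme described near the end of this article (four hypothetical draws against the average opponent, three against a 2300-rated player, plus 43 points). The lines below are my own arithmetic sketch of that calculation, not Sonas's code:

```python
# Topalov: 15.5/25 (62%) vs 2729; Kasparov: 8/12 (67%) vs 2733.
# Linear performance (850 points per 100% above 50%), then pad with
# 4 draws vs. the average opponent and 3 draws vs. 2300, then +43.
topalov = (25 * (2729 + (15.5 / 25 - 0.5) * 850) + 4 * 2729 + 3 * 2300) / 32 + 43
kasparov = (12 * (2733 + (8 / 12 - 0.5) * 850) + 4 * 2733 + 3 * 2300) / 19 + 43
print(round(topalov), round(kasparov))  # 2811 2797
```

The 25-game sample dilutes the seven hypothetical games much more than the 12-game sample does, which is exactly how the formula rewards activity.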


May 22nd (just after Sofia Mtel Masters): Topalov wins another tournament and lengthens his lead on Kasparov, whose inactivity is beginning to take its toll on his placement despite his incredible raw performance rating of 2874. The list is still mostly dominated by players who play many games against strong opponents.

#1 Topalov,V - 2835 score: 22/35 (63%) vs 2732
#2 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#3 Ivanchuk,V - 2794 score: 21.5/30 (72%) vs 2628
#4 Anand,V - 2787 score: 22.5/39 (58%) vs 2716
#5 Leko,P - 2786 score: 14.5/25 (58%) vs 2730
#6 Sutovsky,E - 2765 score: 21.5/30 (72%) vs 2596
#7 Grischuk,A - 2761 score: 23/38 (61%) vs 2667
#8 Aronian,L - 2755 score: 27.5/39 (71%) vs 2583
#9 Svidler,P - 2750 score: 27/45 (60%) vs 2653
#10 Tiviakov,S - 2749 score: 16.5/18 (92%) vs 2471
#11 Polgar,Ju - 2747 score: 12/23 (52%) vs 2733
#12 Bacrot,E - 2740 score: 18.5/28 (66%) vs 2615


September 26th (just before San Luis): Topalov finally stumbles in a weaker event and Ivanchuk takes over first place. However, based on performance during 2005 only, Topalov has performed significantly better than Anand and on this basis should probably be a slight favorite going into San Luis, with Svidler and Leko close together at #3 and #4 among San Luis participants.

#1 Ivanchuk,V - 2818 score: 66.5/91 (73%) vs 2603
#2 Topalov,V - 2808 score: 28.5/47 (61%) vs 2709
#3 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#4 Anand,V - 2787 score: 22.5/39 (58%) vs 2716
#5 Aronian,L - 2785 score: 69.5/100 (70%) vs 2595
#6 Dreev,A - 2766 score: 67.5/103 (66%) vs 2608
#7 Leko,P - 2766 score: 18.5/34 (54%) vs 2723
#8 Svidler,P - 2761 score: 39/65 (60%) vs 2656
#9 Lautier,J - 2761 score: 37/49 (76%) vs 2541
#10 Jakovenko,D - 2760 score: 42/58 (72%) vs 2559
#11 Gelfand,B - 2756 score: 32.5/54 (60%) vs 2654
#12 Akopian,Vl - 2756 score: 34/49 (69%) vs 2584


November 25th (just before FIDE World Cup): It appears that Topalov and Anand will finish at #1 and #2 for the year, unless Ivanchuk or Aronian or Svidler have an incredible finish in the final weeks. If you imagine that 5-10 Candidates spots are being handed out on the basis of final placement on this list, you can imagine that players such as Leko could ill afford to stand pat and play no more games for the year, with so many players breathing down his neck for the top placements.

#1 Topalov,V - 2839 score: 38.5/61 (63%) vs 2714
#2 Anand,V - 2813 score: 38.5/64 (60%) vs 2709
#3 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#4 Ivanchuk,V - 2796 score: 83/121 (69%) vs 2611
#5 Aronian,L - 2791 score: 82.5/120 (69%) vs 2605
#6 Svidler,P - 2790 score: 52.5/86 (61%) vs 2672
#7 Leko,P - 2762 score: 25/48 (52%) vs 2726
#8 Lautier,J - 2761 score: 37/49 (76%) vs 2541
#9 Jakovenko,D - 2760 score: 42/58 (72%) vs 2559
#10 Milov,V - 2758 score: 59.5/83 (72%) vs 2554
#11 Gelfand,B - 2754 score: 36/61 (59%) vs 2658
#12 Akopian,Vl - 2753 score: 38/56 (68%) vs 2589


December 26th (Russian Superfinal, Pamplona in progress): With Aronian vaulting into the #3 spot thanks to his World Cup win, attention shifts further down the list. Dmitry Jakovenko, in the top ten of this list for the past few months, has made a late run, and with his +3 score through seven rounds at the Russian Championship Superfinal he is now ranked #7 for the year among all players. Ruslan Ponomariov is at +1 after four rounds at Pamplona and has moved up to the #12 spot.

#1 Topalov,V - 2839 score: 38.5/61 (63%) vs 2714
#2 Anand,V - 2815 score: 40/66 (61%) vs 2707
#3 Aronian,L - 2799 score: 95/136 (70%) vs 2602
#4 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#5 Svidler,P - 2787 score: 58/95 (61%) vs 2668
#6 Ivanchuk,V - 2782 score: 86.5/127 (68%) vs 2600
#7 Jakovenko,D - 2766 score: 49/69 (71%) vs 2572
#8 Leko,P - 2762 score: 25/48 (52%) vs 2726
#9 Gelfand,B - 2761 score: 47/77 (61%) vs 2645
#10 Grischuk,A - 2760 score: 56.5/88 (64%) vs 2616
#11 Radjabov,T - 2754 score: 60.5/90 (67%) vs 2584
#12 Ponomariov,R - 2750 score: 43/69 (62%) vs 2625


December 28th (two rounds left at Russian Championship, one at Pamplona): In just two days, everyone from #5 down through #12 has shifted places, with Ponomariov winning two straight at Pamplona and both Jakovenko and Svidler only managing half a point in their two games at the Russian Championship Superfinal. What a finish!

#1 Topalov,V - 2839 score: 38.5/61 (63%) vs 2714
#2 Anand,V - 2815 score: 40/66 (61%) vs 2707
#3 Aronian,L - 2799 score: 95/136 (70%) vs 2602
#4 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#5 Ivanchuk,V - 2782 score: 86.5/127 (68%) vs 2600
#6 Svidler,P - 2782 score: 58.5/97 (60%) vs 2668
#7 Leko,P - 2762 score: 25/48 (52%) vs 2726
#8 Gelfand,B - 2761 score: 47/77 (61%) vs 2645
#9 Grischuk,A - 2760 score: 56.5/88 (64%) vs 2616
#10 Jakovenko,D - 2760 score: 49.5/71 (70%) vs 2575
#11 Ponomariov,R - 2758 score: 45/71 (63%) vs 2624
#12 Radjabov,T - 2754 score: 60.5/90 (67%) vs 2584


End of 2005: At the end of the year, imagine that the Candidates are selected from this list rather than from the ancient FIDE lists or even the current FIDE list. Instead of Peter Leko, Michael Adams, Judit Polgar, Alexei Shirov, and Etienne Bacrot, the five selections (based ONLY upon their 2005 performance) would be Levon Aronian, Vassily Ivanchuk, Peter Leko, Boris Gelfand, and Alexander Grischuk.

#1 Topalov,V - 2839 score: 38.5/61 (63%) vs 2714
#2 Anand,V - 2815 score: 40/66 (61%) vs 2707
#3 Aronian,L - 2799 score: 95/136 (70%) vs 2602
#4 Kasparov,G - 2797 score: 8/12 (67%) vs 2733
#5 Ivanchuk,V - 2782 score: 86.5/127 (68%) vs 2600
#6 Svidler,P - 2781 score: 59.5/99 (60%) vs 2668
#7 Leko,P - 2762 score: 25/48 (52%) vs 2726
#8 Gelfand,B - 2761 score: 47/77 (61%) vs 2645
#9 Grischuk,A - 2760 score: 56.5/88 (64%) vs 2616
#10 Jakovenko,D - 2758 score: 50.5/73 (69%) vs 2577
#11 Ponomariov,R - 2757 score: 45.5/72 (63%) vs 2625
#12 Radjabov,T - 2754 score: 60.5/90 (67%) vs 2584
#13 Shirov,A - 2749 score: 50.5/69 (73%) vs 2536
#14 Dreev,A - 2743 score: 86.5/142 (61%) vs 2618
#15 Milov,V - 2742 score: 62/89 (70%) vs 2552
#16 Morozevich,A - 2740 score: 36/64 (56%) vs 2665
#17 Kramnik,V - 2739 score: 27.5/53 (52%) vs 2702
#18 Lautier,J - 2737 score: 45/65 (69%) vs 2557
#19 Bacrot,E - 2736 score: 56.5/91 (62%) vs 2607
#20 Malakhov,V - 2732 score: 37.5/60 (63%) vs 2607
#21 Mamedyarov,S - 2731 score: 76.5/116 (66%) vs 2566
#22 Adams,Mi - 2726 score: 42.5/85 (50%) vs 2696
#23 Bareev,E - 2724 score: 36/59 (61%) vs 2612
#24 Rublevsky,S - 2724 score: 50.5/85 (59%) vs 2617
#25 Tiviakov,S - 2723 score: 54/79 (68%) vs 2545
#26 Sokolov,I - 2717 score: 55.5/92 (60%) vs 2601
#27 Bruzon,L - 2714 score: 64/108 (59%) vs 2605
#28 Areshchenko,A - 2713 score: 76/112 (68%) vs 2533
#29 Bu Xiangzhi - 2712 score: 51.5/80 (64%) vs 2566
#30 Karjakin,Sergey - 2712 score: 69.5/109 (64%) vs 2566
#31 Moiseenko,A - 2712 score: 50/76 (66%) vs 2555
#32 Akopian,Vl - 2712 score: 38/60 (63%) vs 2580
#33 Aleksandrov,A - 2710 score: 46.5/65 (72%) vs 2511
#34 Tkachiev,V - 2710 score: 45.5/62 (73%) vs 2497
#35 Polgar,Ju - 2710 score: 20/43 (47%) vs 2717
#36 Nisipeanu,LD - 2708 score: 57/86 (66%) vs 2545
#37 Motylev,A - 2705 score: 67/107 (63%) vs 2568
#38 Onischuk,Al - 2702 score: 52.5/83 (63%) vs 2564
#39 Navara,D - 2701 score: 55.5/84 (66%) vs 2540
#40 Naiditsch,A - 2701 score: 52/84 (62%) vs 2574
#41 Efimenko,Z - 2700 score: 78/123 (63%) vs 2555
#42 Timofeev,Arty - 2700 score: 67/107 (63%) vs 2563
#43 Georgiev,Ki - 2698 score: 61.5/90 (68%) vs 2517
#44 Harikrishna,P - 2697 score: 49/82 (60%) vs 2587
#45 Azmaiparashvili,Z - 2695 score: 36.5/62 (59%) vs 2598
#46 Sutovsky,E - 2695 score: 52.5/94 (56%) vs 2615
#47 Zvjaginsev,V - 2694 score: 33/57 (58%) vs 2606
#48 Asrian,K - 2694 score: 37/65 (57%) vs 2611
#49 Pantsulaia,L - 2693 score: 35.5/56 (63%) vs 2561
#50 Bologan,V - 2692 score: 72/120 (60%) vs 2575

If you are interested in where my formula came from, you can read about it on my Chessmetrics website. I won’t go into too much detail right here, but I would like to briefly describe the formula so you can see how simple it is. Instead of just using the raw performance rating, I “pad” everyone’s totals with 4 hypothetical draws against their average opponent’s strength, and with 3 hypothetical draws against a 2300-rated opponent. Then I add in 43 points at the end to compensate for those 3 hypothetical low-performance games. Players are encouraged to play more games so that those 7 hypothetical games (especially the 3 against 2300-rated opposition) play a smaller role in the overall average. That’s it, although I should mention that I use my simple “linear” formula for calculating performance rating, where each 10% of your percentage score counts as an extra 85 points.

So if somebody plays 13 games and has a 2700 performance rating against 2500-rated opposition, they would be credited with 13 (real) games of 2700 performance, 4 (hypothetical) games of 2500 performance, and 3 (hypothetical) games of 2300 performance. Plug that into the formula and you get a 2643 “performance score”, which also happens to be my best guess at their true strength, based only on those 13 games. For instance, if you notice that Levon Aronian was given a 2799 score for the year 2005, that means that his rating would actually be 2799 if we only considered the games he played during 2005! Definitely he is someone to keep an eye on in the months to come.
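That arithmetic is easy to check. The sketch below is my own code, not Sonas's, but it follows the constants he states (the 850-point linear slope, the 2300-rated anchor games, and the +43 correction) and reproduces both the 2643 worked example and Aronian's 2799 figure from the year-end list:

```python
def performance_score(points: float, games: int, avg_opp: float) -> float:
    """Pad the real games with 4 hypothetical draws vs. the average
    opponent and 3 hypothetical draws vs. a 2300-rated player,
    average the per-game performances, then add 43 points."""
    raw = avg_opp + (points / games - 0.5) * 850  # linear performance rating
    padded = (games * raw + 4 * avg_opp + 3 * 2300) / (games + 7)
    return padded + 43

# Worked example: a 2700 performance over 13 games vs 2500 opposition.
# On the linear scale, a 2700 performance against 2500 means a score
# fraction of 0.5 + 200/850, so:
points = 13 * (0.5 + 200 / 850)
print(round(performance_score(points, 13, 2500)))  # 2643

# Aronian's 2005 totals: 95/136 against 2602-rated opposition.
print(round(performance_score(95, 136, 2602)))     # 2799
```

Note how the three 2300-anchor games shrink in influence as the real game count grows, which is the mechanism that rewards activity.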

I hope you agree with me that this approach would be a significant improvement upon the method currently used by FIDE. We don’t need to throw out the FIDE rating system, but perhaps it is going a little overboard to directly award Candidates positions based solely on rating. Whenever World Championship qualification is based directly upon ratings, there is a huge incentive for players to try to protect their rating as a possession, rather than treating it simply as a measure of their performance. And this can lead to tactical decisions to play conservatively or even to skip events, which are not at all good for chess. As I said above, what could be better than a system where everyone starts fresh each year, the top players are encouraged to play frequently against strong opponents, and nobody can rest inactive on their high-rating laurels, since qualification spots are no longer awarded based upon ratings?

