In this thread I’d like everybody to stop talking about whether, why, or by how much quick ratings for most players are lower than their regular ratings, or about what to call this difference (“deflation” or something else), or about whether this difference is a problem. If you want to debate these points, please do so in another thread, such as Eliminate Dual Rating or A Rating Related Question or Re: Quick Ratings or elsewhere. If I see any posts here that violate this request, I will ask the moderators to remove them as off topic.
This thread is about how to better align quick ratings with regular ratings. Having almost everybody’s quick ratings lower than their regular ratings, often by 100 points or more, creates (at the very least) a serious PR problem. Players often display a sour-grapes attitude toward their quick ratings, and organizers of quick events frequently use regular ratings instead of quick ratings for eligibility and prize purposes.
I do not want “everybody’s quick rating to be the same as their regular rating”. I want the two averages to be about the same: there should be about as many players whose quick ratings exceed their regular ratings as vice versa.
This goal is best achieved if we view the quick rating system as a satellite system to the regular rating system. In other words, quick ratings should revolve around regular ratings.
My fundamental idea is that, whenever player X enters a quick tournament, his opponents’ regular ratings, rather than their quick ratings, should be used to calculate player X’s new quick rating from his old quick rating:
- A. Each player’s post-event quick rating should be calculated from his own pre-event quick rating, his event score, and his opponents’ regular ratings. That way, opponents’ stale quick ratings would not drag down the system, because they would not be used.
- B1. A player who is new to quick chess should be handled the same way as one who is new to regular chess. In other words, his initial quick rating should be calculated from his event score and his opponents’ regular ratings.
- B2. Alternatively, a player new to quick chess could have his pre-event quick rating initialized from his own regular rating (perhaps based on 10 games), and then his post-event quick rating would be calculated as for a rated player, i.e. from his pre-event quick rating, his event score, and his opponents’ regular ratings.
I’m not sure which of the last two would be better, B1 or B2. A rough code sketch of A, B1, and B2 appears just below.
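To make A, B1, and B2 concrete, here is a minimal sketch using a generic Elo-style update with a fixed K-factor of 32. The actual US Chess formula (variable K, bonus points, special provisional rules, etc.) is more elaborate, so every function name and parameter here is illustrative, not official:

```python
# A minimal sketch of proposal A plus the B1/B2 options, using a
# generic Elo-style update with a fixed K-factor of 32. The actual
# US Chess formula is more elaborate; all names are illustrative.

def expected_score(own_rating, opp_rating):
    """Standard Elo expectation for a single game."""
    return 1.0 / (1.0 + 10.0 ** ((opp_rating - own_rating) / 400.0))

def update_quick_rating(pre_quick, results, k=32):
    """Proposal A: post-event quick rating from the player's own
    pre-event quick rating, his event score, and his opponents'
    REGULAR ratings.  results = [(opponent_regular_rating, score)],
    with score 1, 0.5, or 0 per game."""
    expected = sum(expected_score(pre_quick, opp) for opp, _ in results)
    actual = sum(score for _, score in results)
    return pre_quick + k * (actual - expected)

def initial_quick_rating_b1(results):
    """Option B1: a performance-style initial rating from the event
    score and the opponents' regular ratings (the common linear
    approximation: average opposition + 400*(wins - losses)/games)."""
    n = len(results)
    avg_opp = sum(opp for opp, _ in results) / n
    score = sum(s for _, s in results)
    return avg_opp + 400.0 * (2.0 * score - n) / n

def initial_quick_rating_b2(own_regular):
    """Option B2: seed the pre-event quick rating from the player's
    own regular rating (perhaps treated as based on 10 games)."""
    return own_regular
```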
But the above may not be enough. It might also be desirable, whenever an event is quick-rated, to take additional direct action regarding each player’s (possibly stale) pre-event quick rating (a code sketch of C1-C4 follows the list):
- C1. Define a player’s pre-event quick rating to be completely stale if the player has played 40 or more regular-rated games since his last quick-rated event.
- C2. Conversely, define a player’s pre-event quick rating to be completely fresh if the player has played 20 or fewer regular-rated games since his last quick-rated event.
- C3. In between, define a player’s pre-event quick rating to be partially stale if the player has played more than 20 but fewer than 40 regular-rated games since his last quick-rated event. Define a player’s staleness factor to be 0% at 20 regular-rated games, 100% at 40 regular-rated games, and varying linearly between 0% and 100% as the number of regular-rated games varies between 20 and 40.
- C4. At the start of the rating process for each quick-rated event, adjust each player’s pre-event quick rating to be pR + qQ, where R and Q are the player’s regular and quick ratings, p is the player’s staleness factor, and q = 100% - p.
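Here is the sketch of C1-C4 promised above. The thresholds 20 and 40 come straight from the proposal; the function names and everything else are assumptions about how the blend might be coded:

```python
# C1-C4 in code: the staleness factor is 0% at 20 or fewer
# regular-rated games since the last quick-rated event, 100% at 40
# or more, and linear in between; C4 then blends the regular and
# quick ratings as pR + qQ.

def staleness_factor(regular_games_since_last_quick):
    """C1-C3: 0.0 at <= 20 games, 1.0 at >= 40, linear in between."""
    g = regular_games_since_last_quick
    if g <= 20:
        return 0.0
    if g >= 40:
        return 1.0
    return (g - 20) / 20.0

def adjusted_pre_event_quick(regular, quick, regular_games_since_last_quick):
    """C4: adjusted pre-event quick rating = p*R + q*Q, with q = 1 - p."""
    p = staleness_factor(regular_games_since_last_quick)
    return p * regular + (1.0 - p) * quick
```

For example, a player with R = 1600, Q = 1450, and 30 regular-rated games since his last quick-rated event has p = 50%, so his adjusted pre-event quick rating would be 0.5(1600) + 0.5(1450) = 1525.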
Oh, and:
- D. No more dual ratings. (It’s not clear whether dual ratings have helped or hurt, but with the above plan they should not be necessary.)
TESTING: Go back to January 1, 2017. (It should not be necessary to go all the way back to 2004.) Zero out all quick ratings, start from scratch at that date, and re-rate all quick events played since then.
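In code, the test amounts to replaying every quick-rated event since the start date, in chronological order, against an initially empty quick-rating table. This sketch reuses update_quick_rating and initial_quick_rating_b1 from the first sketch, and the event data structure is a placeholder for whatever the ratings database actually stores:

```python
# A sketch of the re-rating loop: zero out the quick-rating table,
# then replay every quick-rated event since January 1, 2017 in
# chronological order.  (The C1-C4 staleness adjustment is omitted
# here for brevity.)

from datetime import date

def rerate_from_scratch(events, start=date(2017, 1, 1)):
    """events = chronologically sorted list of (event_date, games),
    where games maps player id -> [(opponent_regular_rating, score)].
    Returns the rebuilt quick-rating table."""
    quick = {}                      # empty table = "zeroed out"
    for event_date, games in events:
        if event_date < start:
            continue
        post = {}
        for player, results in games.items():
            if player in quick:     # already quick-rated in the rerun
                post[player] = update_quick_rating(quick[player], results)
            else:                   # new to quick chess: option B1
                post[player] = initial_quick_rating_b1(results)
        quick.update(post)          # apply all post-event ratings at once
    return quick
```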
Look at the results to see if they meet the desiderata stated in my third paragraph at the top of this post. If not, make (at least some of) the following modifications and try again (E1 and E2 are sketched in code after the list):
- E1. Make the adjustment described in C4 only if the player’s (unadjusted) pre-event quick rating is lower than his regular rating, i.e. only if Q < R.
- E2. Replace all instances of “his opponents’ regular ratings” in A, B1, and B2 with “his opponents’ regular or quick ratings, whichever is higher in each case”.
- E3. Rating floors should apply to quick ratings too, i.e. any player whose quick rating floor is lower than his regular rating floor should have it raised to match his regular rating floor.
- E4. Check the formulas involving bonus thresholds, bonus multipliers, K-factors, multiple passes, etc., to see if there are any differences (stated or indirect) that might be holding quick ratings down, and make any necessary corrections.
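As promised above, E1 and E2 are easy to express as small variations on the earlier sketches (adjusted_pre_event_quick comes from the C1-C4 sketch; again, all names are illustrative):

```python
# E1 and E2 as small variations on the earlier helpers.

def adjusted_pre_event_quick_e1(regular, quick, regular_games_since_last_quick):
    """E1: apply the C4 blend only when Q < R; otherwise leave the
    pre-event quick rating alone."""
    if quick >= regular:
        return quick
    return adjusted_pre_event_quick(regular, quick, regular_games_since_last_quick)

def opponent_rating_e2(opponent_regular, opponent_quick):
    """E2: use the opponent's regular or quick rating, whichever is
    higher in each case."""
    return max(opponent_regular, opponent_quick)
```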
If any of these modifications seem to be going too far (i.e. if the quick ratings now appear inflated instead of deflated), cut some of them back and try again.
IMPLEMENTATION: Go back to January 1, 2017, the same date as for testing. (Quick ratings may not be important enough to warrant going back any further.) Zero out old quick ratings, just as in the testing stage:
- F1. Any players who acquire quick ratings in this re-rating will have these new quick ratings listed on MSA.
- F2. Any players who do not acquire quick ratings in this manner will continue to have their old quick ratings listed on MSA, but these old ratings will be listed as P/0 (provisional based on 0 games). The P/0 listing will have two purposes: (a) to alert the MSA rating calculation programs that these players should be treated as unrated in the quick system; and (b) to alert tournament organizers that they may wish to use these players’ regular ratings, rather than their expired quick ratings, for pairings and prize purposes. (Of course, organizers have the right to use either rating anyway, but the P/0 listing will give them a little extra guidance.)
As always, testing is important. Test, examine the results, adjust the formulas slightly, test again, etc. Results from such empirical methods can be better than those obtained by adhering to any one person’s opinion of what caused the problem in the first place.
Bill Smythe