In quick tournaments (G/5 - G/29) only, you do not need to keep score.
If the tournaments were unrated, the USCF would not get the rating fees! Quick tournaments are your fun tournaments; now you understand why!
Doesn’t WinTD have a function that, when switching to quick ratings, uses the regular rating if a player doesn’t have a quick rating? I thought it was common practice to use someone’s regular rating in a quick tournament if they didn’t have a quick rating.
I’m also of the opinion that quick ratings should be used for pairing and prize purposes in quick (G/29 or below) tournaments. Otherwise why have them, and indeed why bother trying to incorporate the blitz-only rating that people have been champing at the bit for?
As for the big tournaments, I know the major tournaments here in Vegas use the higher of your regular and quick ratings to determine pairings and prize distributions for their blitz tournaments.
Being one of the apparently few people in the land with a higher quick rating than regular rating, I’m not really affected; however, I think Mike would claim that it probably does affect a high percentage of the players who participate.
I’m guessing the major tournaments do this to prevent sandbagging, but if most players’ quick ratings are proportionately lower than their regular ratings, does it really affect matters that much?
However, if they did not do this, then I assume it would adversely affect players without a quick rating, since their regular rating would be used, and that has usually proven to be higher than what their quick rating will eventually be. And yes, this is what WinTD does (see the first paragraph above), and it is what I do at my tournaments. If you haven’t got a quick rating, I’d rather have some estimate of your playing strength than call you unrated!
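For illustration only, here is a minimal sketch (in Python, with hypothetical function names) of the two selection rules described above: the quick-then-regular fallback that WinTD reportedly applies, and the higher-of-the-two rule the big Vegas blitz events use:
[code]def pairing_rating_fallback(quick, regular):
    # WinTD-style fallback: use the quick rating if the player has one,
    # otherwise fall back to the regular rating (never "unrated").
    return quick if quick is not None else regular

def pairing_rating_highest(quick, regular):
    # Anti-sandbagging rule: use the higher of the two ratings,
    # ignoring whichever rating is missing.
    candidates = [r for r in (quick, regular) if r is not None]
    return max(candidates) if candidates else None

print(pairing_rating_fallback(None, 1800))  # 1800 (regular stands in)
print(pairing_rating_highest(1750, 1800))   # 1800 (higher of the two)
[/code]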
Chris Bird
I don’t really know why most players have a lower quick rating than their regular rating, Chris.
Both rating systems use essentially the same formulas; the only difference is that a quick rating that is ‘seeded’ from a regular rating starts out as being based on up to 10 games, while a regular rating that is ‘seeded’ from a quick rating always starts out as being based on 0 games.
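In pseudocode terms (my own sketch of the rule as just described, not the actual USCF code), the asymmetry looks like this; it matters because a rating credited with more prior games moves more slowly in response to new results:
[code]def seed_quick_from_regular(regular_rating, regular_games):
    # The seeded quick rating starts at the regular rating and is
    # credited with up to 10 of the player's prior games.
    return regular_rating, min(regular_games, 10)

def seed_regular_from_quick(quick_rating, quick_games):
    # The seeded regular rating starts at the quick rating but is
    # always treated as based on 0 games (a pure initial estimate).
    return quick_rating, 0
[/code]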
For several years (2000 through late 2004) players whose first event was dual rated wound up with their quick games in effect being rated twice, since the quick rating was seeded from the player’s regular rating. (That’s also why those players have an incorrect game count on their quick rating.)
I don’t think either of those is enough to cause over 90% of players with both ratings to have a much lower quick rating, though.
Possibly because of rating floors? Players who were on (or near) their floors at regular chess tended to pump some points into that system, while they would have to lose 200 points at quick chess before hitting their new floor there.
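If I follow Tom’s argument, the mechanics would look roughly like this (a sketch; it assumes the conventional USCF floor of peak rating minus 200, truncated to the hundreds, with 1400 as the lowest floor counted in the statistics below):
[code]def apply_floor(new_rating, peak_rating, lowest_floor=1400):
    # A rating may not drop below its floor: the peak rating minus
    # 200, truncated to the hundreds level (assumed formula).
    floor = (peak_rating - 200) // 100 * 100
    if floor < lowest_floor:
        return new_rating  # no floor applies
    return max(new_rating, floor)

# A player with an 1850 peak sits on a 1600 regular floor, so losses
# below 1600 vanish and pump points into the regular pool; his newly
# seeded quick rating has roughly 200 points to fall before a quick
# floor produces the same effect on the quick side.
print(apply_floor(1540, 1850))  # 1600
[/code]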
I suppose that’s possible, Tom, but of over 91,000 players who have played a regular-rated game since 1/1/2004, only 11,425 have a ratings floor, and of those only 748 (6.5%) are currently at their ratings floor. Further, only 395 of the regular-floored players have played in games that were quick/dual-rated.
Can a small percentage (less than 10% of the quick-rated players) really be deflating the entire pool that much?
Or – weren’t there some regular-rating features that weren’t implemented initially when quick ratings began? Bonus points might be an example. If something like this happened at the outset, the effects would still be noticeable.
Bill Smythe
Is 748 the number of people right on the floor of XX00? If so, it might be interesting to know how many have hit their floor in the last N years. From my own personal experience, it’s not uncommon for a rating to float between the floor XX00 and XX50 with occasional bottoming-outs.
I don’t know if there was a time when quick ratings were being computed significantly differently than regular ratings.
I do know that at the time I was writing the new programming, the algorithms were identical except for the documented differences in how the initial estimate for a player is seeded from his ‘other’ rating. I back-tested the ratings of thousands of events to make sure the new program was doing things the way the old one did; any undocumented differences would have shown up during that testing.
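That kind of back-testing amounts to a regression harness: re-rate the same historical events with both implementations and flag any event where the outputs disagree. A minimal sketch under that assumption (the rating functions here are hypothetical stand-ins, not the actual USCF programs):
[code]def back_test(events, rate_old, rate_new):
    # Each rate_* function takes an event and returns a dict of
    # {player_id: post-event rating}. Collect every event where the
    # two implementations disagree on any player's new rating.
    mismatches = []
    for event in events:
        old_ratings = rate_old(event)
        new_ratings = rate_new(event)
        if any(old_ratings[p] != new_ratings[p] for p in old_ratings):
            mismatches.append(event)
    return mismatches  # empty list = no undocumented differences
[/code]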
There have been 27,774 players with a regular rating floor of 1400 or higher since late 1991. Of 857,764 ratable results (players X sections) for these players with a floor, only 20,729 (2.4%) have resulted in a player being at his rating floor, and only 2359 (8.5%) of the players with a regular ratings floor have ever been at that floor.
There have been 8553 players with a quick rating floor of 1400 or higher since 1991. Of 134,546 ratable results (players X sections) for those players with a floor, only 2477 (1.8%) have resulted in a player being at his rating floor and only 270 (3.2%) of the players with a quick ratings floor have ever been at that floor.
I don’t know if breaking the data down by year would provide any more useful data. For games played in 2005, 1042 of the 8763 players with a regular ratings floor were at their floor at least once (3831 out of 60,224 ratable results) and 138 of the 3124 players with a quick ratings floor were at their floor at least once (701 out of 20,852 ratable results.)
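As a quick sanity check, the percentages quoted above follow directly from the raw counts:
[code]# Recomputing the floor percentages quoted above from the raw counts.
print(f"{20729 / 857764:.1%}")  # regular results at a floor      -> 2.4%
print(f"{2359 / 27774:.1%}")    # regular players ever at a floor -> 8.5%
print(f"{2477 / 134546:.1%}")   # quick results at a floor        -> 1.8%
print(f"{270 / 8553:.1%}")      # quick players ever at a floor   -> 3.2%
[/code]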
Something must have happened somewhere along the line (probably at the outset of quick ratings, under the old software), because I’m sure almost everybody’s quick rating is lower than their regular rating, typically by almost 100 points.
Maybe Mike can do one of his information-packed charts to compare regular ratings to quick ratings for players who have both.
Perhaps the ratings committee could do something to bring the two systems into line – maybe even something as simple as adding 75 points to everybody’s quick rating, on a one-time basis. (Historically, I have loathed such artificial contrivances for regular ratings, but for a regular-quick difference of this sort, it makes some sense.)
Bill Smythe
How about this: maybe a player’s quick rating reflects the strength at which they play under quicker time controls? Isn’t one of the arguments for a longer time control that the “quality” of the game is higher?
If players’ quick ratings are lower than their regular ratings, then maybe that is exactly where their quick ratings should be, and the gap is a fair comparison against their regular rating of how much the quality of their play drops with less time.
My quick rating is slightly higher than my regular rating, so this isn’t the case for everyone, but thanks for suggesting you make me a Master at quick chess… I have been stuck at Expert for a while now and it was starting to bug me!
Chris
Possible reason quick ratings tend to be lower:
Could this be the reason?
If there is a consistent, across-the-board bias between the two rating systems, then maybe we should make a one-time adjustment. If the reason I gave causes the bias to continually re-appear, then maybe some “inflationary” component should be added to the quick ratings to account for it.
Gee, and I always thought I wasn’t as good at quick time controls. Maybe it was just a bias in the rating systems!
I’ve forwarded the data below to the chair of the ratings committee for his interpretation of it.
This shows the average regular and quick rating, by membership type, for players who have established ratings in both systems as of the December 2005 annual rating list and who joined the USCF in 2001 or later.
I don’t know whether the tendency for the difference between the two rating systems to diminish for more recent new members is indicative of anything.
[code]memtp year count reg quick diff
E 2003 55 1562.1 1514.0 48.1
E 2004 101 1616.1 1577.0 39.0
J 2001 252 1387.6 1272.8 114.8
J 2002 316 1328.3 1229.8 98.6
J 2003 289 1199.1 1120.6 78.4
J 2004 237 1112.0 1049.6 62.4
J 2005 39 1093.7 1056.2 37.6
L 2002 28 1938.5 1880.7 57.9
L 2003 33 1895.3 1850.3 44.9
L 2004 86 1892.3 1850.8 41.6
O 2001 204 1017.4 957.1 60.3
O 2002 444 883.7 836.9 46.8
O 2003 546 781.3 739.8 41.5
O 2004 615 696.2 664.1 32.0
O 2005 120 630.8 607.1 23.7
R 2001 118 1640.2 1544.7 95.5
R 2002 200 1630.2 1549.5 80.7
R 2003 307 1679.4 1617.6 61.8
R 2004 1037 1774.6 1718.6 55.9
R 2005 154 1729.8 1684.7 45.1
U 2001 320 1185.9 1087.0 98.9
U 2002 491 1063.7 987.3 76.4
U 2003 550 926.1 868.3 57.8
U 2004 583 849.7 805.7 44.0
U 2005 108 705.8 680.5 25.3
W 2001 63 1219.6 1121.2 98.3
W 2002 95 1057.8 1002.8 55.1
W 2003 85 925.4 890.2 35.2
W 2004 103 858.1 823.3 34.8
W 2005 25 879.6 842.2 37.4
Z 2002 41 1904.1 1861.5 42.6
Z 2003 36 1842.5 1771.9 70.6
Z 2004 96 1844.4 1795.3 49.1
[/code]
Here are the totals for all players listed in the 2005 Annual List with established regular and quick ratings:
[code]memtp year count reg quick diff
E all 248 1570.3 1528.5 41.9
J all 1824 1386.4 1276.6 109.7
L all 283 1854.7 1812.2 42.5
N all 31 1731.2 1629.0 102.1
O all 2153 827.3 782.9 44.4
R all 2852 1712.5 1639.3 73.2
S all 36 1685.3 1616.1 69.1
U all 2411 1015.1 944.8 70.2
V all 25 1929.6 1841.6 88.0
W all 521 1109.3 1046.4 62.9
Z all 339 1809.3 1755.5 53.8[/code]
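For what it’s worth, weighting each row of the totals table by its player count gives an overall regular-quick gap of roughly 70 points, which is in the neighborhood of the 75-point one-time adjustment floated earlier:
[code]# (membership type, count, avg regular, avg quick) from the totals table
totals = [
    ("E", 248, 1570.3, 1528.5), ("J", 1824, 1386.4, 1276.6),
    ("L", 283, 1854.7, 1812.2), ("N", 31, 1731.2, 1629.0),
    ("O", 2153, 827.3, 782.9),  ("R", 2852, 1712.5, 1639.3),
    ("S", 36, 1685.3, 1616.1),  ("U", 2411, 1015.1, 944.8),
    ("V", 25, 1929.6, 1841.6),  ("W", 521, 1109.3, 1046.4),
    ("Z", 339, 1809.3, 1755.5),
]
players = sum(n for _, n, _, _ in totals)
gap = sum(n * (reg - quick) for _, n, reg, quick in totals) / players
print(f"{players} players, count-weighted gap: {gap:.1f}")  # ~70.4
[/code]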
Could it just be that they’ve played fewer quick games so that their quick rating hasn’t fallen yet? It might be useful to know the average number of quick games played by each of the groups.
This explanation is highly plausible – the most rapidly improving players play in quick-rated events in higher proportion than other players.
But THAT explanation is not plausible. Rating systems measure RELATIVE strength. If player A is better than player B at regular, odds are A is also better than B at quick.
In any case, with regular ratings used to seed quick ratings, the “lesser ability” argument falls flat on its face. If two players rated 1800 regular begin their quick play by playing each other, how are BOTH of them going to lose quick rating points, even if they are only 1700 strength at quick?
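Bill’s point can be made concrete with a plain Elo-style update (a simplification of the actual USCF formula, which adds bonus points and varying K-factors): the winner’s gain equals the loser’s loss, so two players seeded at the same rating cannot both go down.
[code]def elo_update(r_a, r_b, score_a, k=32):
    # Standard Elo update for one game; score_a is 1, 0.5, or 0.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta  # zero-sum: gain equals loss

# Two 1800-regular players whose quick ratings were both seeded at 1800:
a, b = elo_update(1800, 1800, 1.0)
print(a, b)  # 1816.0 1784.0 -- the pair's average stays at 1800
[/code]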
Bill Smythe
Question: Is it possible for quick ratings to be lower simply because most people play relatively less well at the faster time controls, with a few people who play relatively as well or even relatively better at faster time controls?
I remember Koltanowski, the “Grand Master of the knight’s tour,” talking about playing blindfold chess on television years ago in a PBS series (around 1968). He remarked that the blindfold game he was playing and commenting on was “unfair to his student” because he, the Grand Master, was a stronger and more aggressive player when blindfolded.
Could a similar effect be operating here? A difference in the shape of the ratings distribution might indicate that a redistribution of relative strength occurs at faster time controls.
Charlotte
It IS barely possible (though I don’t believe it) that fewer bonus points tend to be awarded for quick play. That would also provide a reason for the bias. (Maybe the results tend to be a little more random, so there’s less chance of a truly outstanding performance even if your ability has outstripped your rating). Like I said, I don’t really think that’s the cause.
Wasn’t quick chess originally rated at 60% of the change of a regular rating? Were there bonus points during that period, and if so, were the requirements lowered proportionally from those used for regular ratings?
22.1% of quick rated results since 1/1/2005 earned a bonus, 19.2% of regular rated results earned a bonus.
However, that may not be indicative of much, because most scholastic games are dual-rated.
Mark Glickman has suggested some ways to analyze the data further; if I have time, I’ll work on that over the weekend.
I don’t believe that has been the case since at least 2001, when the current formulas went into effect. In any case, that probably doesn’t explain why there is a bias even among players who didn’t begin playing rated chess until 2004 or later.
This may only illustrate the futility of trying to make supposedly independent ratings systems relate to each other, a lesson that may apply to comparisons between the USCF and FIDE ratings systems.