G/30 - Standard or Quick?

There are many G/30 events nationwide where, at present, 5 minutes are deducted when a 5-second delay is used. Scheduling problems necessitate something like this. Players want these games to be regular-rated (i.e. dual-rated).

It would not be good to throw the baby out with the bath water by suddenly making such a time control illegal.

On the other hand, players should not be encouraged to use analog clocks to get the “extra” 5 minutes.

So, G/25 d/5 should be legal as a regular-rated event, and games played with analog clocks should (if the TD so decrees) be played at G/25 (not G/30) and still be regular-ratable.

Bill Smythe

I suspect allowing analogs to be set at G/25 and still have the game be dual rated will not be acceptable to the Delegates.

What is your specific disagreement? Is it that G/25 should not be rated as Regular while G/30 should? Does 5 minutes make that much difference? I’d vote for G/60 as the fastest time control eligible for regular rating.

Then you would be voting to take nearly 2/3 of the games the USCF currently rates under the regular rating system out of that system, because that’s how many of those games are dual rated.

Of those dual rated games, around 16% involve players over the age of 19. (54% of the regular-only rated games involve players over the age of 19.)

It is also worth recalling that for as long as the USCF has been rating Game/30, it has ALWAYS been regular rated; those games didn’t start to be dual rated (i.e., included within the quick rating system) until 2001.

Shouldn’t the real question here be, “At what point does the predictive value of the rating system break down?”

If the predictive value of the rating system on average is just as good at game/30 as it is at slower time controls, then it shouldn’t matter if game/30 is regular rated. The same argument could be made for game/5.

Once you have decided that, it is up to the players whether they want to play at those faster time controls or not.

I think there must be some point at which the time control starts to affect the outcome, and that point may vary from individual to individual.

In the game/30 event we started two Mondays ago, there were no upsets out of 12 games on the first night.
On the second night of play there were 3 1/2, with 2 of those being against the same player.

I think the ratings committee has run some tests comparing regular-only, dual, and quick-only rated games, and for the most part the actual results are similar; I’ve looked into it a couple of times as well.

As I recall, if you segment the games by rating class (e.g., look at games involving, say, 1800-rated players), there were fewer draws at faster time controls, but more upset wins. I’ll see if I can dig up that data.

OK, here’s a graph showing games by A players during calendar 2009, plotting actual performance versus rating difference (in 25-point intervals) for games that were regular-rated only, dual-rated, or quick-rated only. (I took out games where the ratings difference was 1000 or more points.) This uses pre-event ratings.

uschess.org/datapage/a-players.png

As you can see, the three lines are very close as long as the ratings difference is less than about 400-500 points. Further out, lower numbers of games (especially for quick-rated only) may skew the data.

I’ve also shown the expected performance formula for up to a 600 point ratings difference, so you can see how all three actual data plots diverge from expected performance.
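For reference, I’m assuming the expected performance curve shown is the standard logistic formula, which matches the .9090909 figure at 400 points mentioned below. A minimal sketch:

```python
def expected_score(diff):
    """Logistic expected score for a player rated `diff` points
    above (positive) or below (negative) the opponent."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

print(expected_score(0))     # 0.5
print(expected_score(400))   # 0.9090909090909091
print(expected_score(-400))  # 0.09090909090909091
```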

It looks a little strange. At -400 the expected value appears to be about 0.1, and at +400 about 0.9.

For dual-rated actual results, -400 appears to be 0.08 while +400 is 0.85, which would mean that the average total number of points for games with a 400-point difference comes to 0.93 instead of 1.

From -200 to +200, both the higher-rated and lower-rated players seem to be performing below expectation (except for a couple of small spikes in the quick-rated graph), which would translate to each game producing an average of less than 1 point scored between the two players (at no rating difference, both players average 0.48 points instead of 0.5).

I’m not sure double-forfeits would have that much of an impact, particularly since forfeited games are not ratable anyway.

Keep in mind that the games A players play against people rated 400 points BELOW them are different games than those against people rated 400 points ABOVE them (roughly 1400-1599 opponents versus 2200-2399 opponents), so the sum of the actual performance at -400 and +400 is not necessarily going to add up to 1.0, as it does on the expected performance curve.
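To confirm that last point: the logistic expected curve sums to 1 for any +d/-d pair by construction, so it’s only the actual results, computed over two different sets of games, that are free to drift. A quick check:

```python
def expected(d):
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

for d in (100, 250, 400):
    print(d, expected(d) + expected(-d))   # 1.0 (up to rounding) for every d
```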

What I did was to look at the subset of all games where at least one of the players was rated between 1800 and 1999.

There also may be more games against opponents rated 400 points lower than against opponents rated 400 points higher.

Non-played games (e.g., forfeits) are not rated and were not included.

BTW, at 400 points the higher rated player’s expected performance is .9090909.
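Putting those rules together, here’s a rough sketch of the aggregation in Python (hypothetical field layout, not the actual extraction from the ratings database):

```python
from collections import defaultdict

def a_player_performance(games, bucket=25, max_diff=1000):
    """Average actual score in 25-point intervals of pre-event rating
    difference, for games where at least one side is an A player.
    `games` is an iterable of (rating, opp_rating, score) tuples seen
    from one player's side, score in {1, 0.5, 0}; forfeits excluded."""
    totals = defaultdict(lambda: [0.0, 0])
    for rating, opp_rating, score in games:
        if not (1800 <= rating <= 1999 or 1800 <= opp_rating <= 1999):
            continue                         # keep only A-player games
        diff = rating - opp_rating
        if abs(diff) >= max_diff:            # drop 1000+ point mismatches
            continue
        key = bucket * round(diff / bucket)  # nearest 25-point interval
        totals[key][0] += score
        totals[key][1] += 1
    return {d: pts / n for d, (pts, n) in sorted(totals.items())}
```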

Here’s the same type of chart for games played in 2009 involving D players:

uschess.org/datapage/d-players.png

As you can see, the D players appear to diverge from the expected performance curve more than A players did. I’ll leave it to others to suggest why that should be the case.

So if you did comparable charts for, say, 1993, before dual rating and maybe even before game/30 was rated as regular, I wonder how different they would be?

I think Game/30 dates back to about 1988; I seem to recall that discussion at the Delegates Meeting being what led to the creation of the Quick system, which started in late 1991. (It took a while for the USCF to get all the pieces in place to handle QR events.)

BTW, the A player data for 1993 looks pretty similar to the 2009 data:

uschess.org/datapage/a-1993.png

And here’s 1993 vs 2009 for regular ratings:

uschess.org/datapage/a-1993-2009.png

So can we safely say that the USCF rating system is as accurate as it ever has been?

I guess that depends on what you mean by ‘accurate’. About all those graphs say is that there has been very little change over the years in how well players (in this case A players) do compared to their expected performance.

To some extent this is a self-fulfilling prophecy, as someone who performs well above or below expectation gets a rating change that should move him or her closer to the expected performance curve.
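That feedback loop is easy to see with a generic Elo-style update (illustrative only: K=32 and the simple formula below, whereas the actual USCF formula has bonus points, rating floors, and a variable K):

```python
def updated_rating(rating, opp_rating, score, k=32):
    """Generic Elo-style update: the rating moves in proportion to
    (actual score - expected score), so sustained over- or
    under-performance pulls the rating toward the expected curve."""
    expected = 1.0 / (1.0 + 10.0 ** ((opp_rating - rating) / 400.0))
    return rating + k * (score - expected)

# A player who keeps over-performing gets pulled up the curve:
r = 1500.0
for _ in range(10):
    r = updated_rating(r, 1500.0, 1.0)   # ten straight wins vs 1500s
print(round(r))  # ~1631, where wins over 1500s are largely expected
```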

I suggest people read the Rating Committee’s reports, Mark Glickman has them on his website, glicko.net, going back to 1993.

However, going back to the original question, it does not appear that the time control has much of an impact on expected vs actual performance in the aggregate. (Individual players, well, that’s a different question.)