USCF Rating Deflation??

I saw a post to a chess group by a John Coffey who claims that the USCF has been deflating the rating system. I have never heard of this; is there any truth to it? I’d sure love to claim that “yeah, I’m really an expert” :slight_smile:

“but the USCF has been deliberately
deflating the rating system by about 100 points over the last 10 years to
bring ratings in line with FIDE ratings.” (John Coffey)

Full text groups.msn.com/UtahChess/general … 1115983868

It’s true that in states with small USCF memberships, the ratings tend to be much lower than in states with larger populations. Take two different groups of chess players and start them all off as UNR: one group with 300 players and a second group with 3,000 players. If the members of each group only play in tournaments with each other, the group of 300 players will in time have a lower top-rated player than the group of 3,000 players.
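A minimal sketch of the order-statistics side of that argument (the skill distribution here is made up, not real USCF data): the strongest player drawn from a 3,000-player pool is reliably stronger than the strongest drawn from a 300-player pool, and in a closed pool the ratings can only redistribute the points the members bring in.

```python
import random
import statistics

random.seed(42)

def top_skill(pool_size, trials=200):
    """Average top 'true skill' in a pool drawn from the same distribution."""
    tops = []
    for _ in range(trials):
        # hypothetical skill distribution, mean 1500, sd 200
        skills = [random.gauss(1500, 200) for _ in range(pool_size)]
        tops.append(max(skills))
    return statistics.mean(tops)

small = top_skill(300)
large = top_skill(3000)
print(f"average top skill, 300-player pool:  {small:.0f}")
print(f"average top skill, 3000-player pool: {large:.0f}")
```

The larger pool’s best player comes out meaningfully stronger every time, even though both pools are drawn from exactly the same distribution.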

The person talking about the rating deflation is from the state of Utah, which has a very active chess scene and a large population of chess players. Other states, like Alaska, Montana, Nebraska, North Dakota, South Dakota and Wyoming, have very small populations and lower top ratings than, say, California, New York, or Texas.

Only Nolan would have records of the average rating for the top 25 players in each state. If you want a chance to become an Expert, you would have better luck at the Marshall Chess Club than in an area with weaker (lower-rated) players. Ratings are not always equal to each other in different areas of the nation. There are class A players who should be Masters, but they can never become Masters if they are always at the top of the roster.

It’s not a question of rating deflation, it’s a question of active population density. Suppose the USCF had 250,000 players active in tournaments in one year, averaging 50 games each. Just think what a class A player could pick up in rating points in that year. If the USCF wants rating inflation, it needs to get more active players. If the adult membership declines, so will the ratings.

First, don’t listen to anything Doug Forsythe says about ratings. As usual, he’s off topic and way far afield.

In 1972-1973, as part of the Fischer boom, zillions of weak players entered the rating pool. Accordingly, the “average” rating within USCF dropped dramatically – as you would expect.

Someone without much mathematical acumen, or common sense, decided it was important to raise this “average” rating to what it had been before the Fischer boom. To this end, they introduced extreme versions of bonus points and feedback points. The result was massive rating inflation – perhaps 200 points per player. For some reason, it was not recognized that if the average strength is lower than it was, the average rating should be, too.
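To see why bonus and feedback points inflate the pool, here is a minimal Elo-style sketch (K=32 and the flat 10-point bonus are made-up numbers, not the actual USCF formulas): a plain exchange is zero-sum, but any bonus is created out of thin air rather than transferred from an opponent.

```python
K = 32  # assumed development coefficient, not USCF's actual value

def expected(ra, rb):
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def update(ra, rb, score_a, bonus=0):
    """Return new ratings; 'bonus' points are created, not transferred."""
    delta = K * (score_a - expected(ra, rb))
    return ra + delta + (bonus if delta > 0 else 0), rb - delta

# Plain exchange: the pool total is unchanged (zero-sum).
a, b = update(1500, 1500, 1.0)
print(a + b - 3000)   # 0.0

# With a 10-point bonus, the pool grows by 10 every time it is awarded.
a, b = update(1500, 1500, 1.0, bonus=10)
print(a + b - 3000)   # 10.0
```

Award bonuses like that to thousands of players per supplement and the average rating climbs even if nobody is actually playing any better.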

So I suspect John Coffey was right, for a while. There was a period of deflation, whether deliberate or not. It was achieved by eliminating bonus and feedback points, two devices designed to correct the rating deflation caused by the learning process.

This ended a year or two ago with the new rating system, but ratings are still inflated, compared to FIDE, by about 50 points.

The ratings committee is keeping a close eye on the operation of the new rating system, to monitor inflationary or deflationary tendencies. For example, they recently decided to keep the bonus floor at its “temporary” level for a while longer, to help combat deflation.

There is always a lot of upward political pressure on the rating system. Everyone and his monkey’s uncle thinks he’s under-rated. That doesn’t make it so.

If you want that big 2 as the first digit of your rating, improve your chess – don’t ask that ratings be re-inflated so that EVERYBODY will start with 2 and your rating will be meaningless.

Bill Smythe

I’m not impressed with Mr. Coffey’s argument. In the first place, “rating deflation” is a slippery term which usually means “I think my rating should be higher.” Since we can’t freeze a player from thirty years ago and defrost him for comparison, statements as to what rating “should” correspond to what playing strength are pretty much empty of meaning.

In the second place, the core of Mr. Coffey’s position is summed up in “Glenn Peterson (sic) told me that 20 years ago the average rating in the USCF was above 1550, and then 10 years ago below 1400 and now it is below 1250.” What Mr. Coffey doesn’t seem to realize (though Glenn should) is that from about 1979 to 1983, rating inflation really was out of control. An analysis of USCF crosstables from that period shows that the rating pool was increasing by an average of nearly two points per player per game. If you’re allowed to cherry-pick your data points, you can prove anything you want.
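For what it’s worth, that drift measurement is simple to compute if you have pre- and post-event ratings. A sketch with made-up crosstable numbers (not the actual 1979–1983 data):

```python
# Hypothetical crosstable rows: (pre_rating, post_rating, games_played).
crosstable = [
    (1500, 1540, 10),
    (1620, 1610, 10),
    (1400, 1430, 10),
]

# Pool drift = total rating points created, per player-game.
total_change = sum(post - pre for pre, post, _ in crosstable)
total_games = sum(g for _, _, g in crosstable)
drift = total_change / total_games
print(f"pool drift: {drift:+.2f} points per player per game")  # +2.00 here
```

A drift near zero means the system is roughly zero-sum; a sustained positive drift like the one above is exactly the out-of-control inflation described.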

To be fair to Mr. Coffey, he does raise an interesting question: Does a steady increase in ratings help keep players interested in the game? In other words, should the rating system be seen primarily as a promotional tool, rather than a predictive model? (Of course it is both; the question is one of emphasis.) I think his position is completely wrong, but it’s a subject worthy of debate.

As soon as Mike Nolan fixes a ratings snafu in one of my tournaments, I’ll probably be at or above my all-time high rating. Of course one person is a small sample size, but I actually don’t study chess anywhere near as much as I did back when I hit my high a decade ago, when I was probably 20-21 years old.

I know a lot of people who are at rating levels lower than they used to be, but for most of them it’s age catching up or something else, because I can tell that they’re not playing that well compared to how they used to play.

Claims like this are really impossible to measure … but here is a natural factor for rating deflation: Abysmal players play in their first tournament, and then get better. They either leave quickly (and the average rating has gone down a little), or they stick around and improve (and take rating points from other people). The “new” rating system is designed, I think, to minimize this (in the old days, when it was a zero-sum game, it was awful).
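A toy illustration of that drain under a plain zero-sum Elo-style update (K=32, the ratings, and the 20-game span are all assumptions, not USCF’s actual formula):

```python
K = 32  # assumed coefficient

def expected(ra, rb):
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((rb - ra) / 400))

# The newcomer enters at 1200, but his true strength is really ~1600.
newcomer, veteran = 1200.0, 1600.0
for _ in range(20):
    # He scores like the veteran's equal: 0.5 per game on average.
    delta = K * (0.5 - expected(newcomer, veteran))
    newcomer += delta   # underrated improver climbs ...
    veteran -= delta    # ... and every point comes out of the veteran

print(f"veteran now rated {veteran:.0f} (started at 1600)")
```

The pool total never changes, so the newcomer’s climb is paid for entirely by established players; that is the deflationary pressure in a strictly zero-sum system.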

I think the following two factors more than offset that (so I would think there should be a slight inflation):

  1. Bonus points only work one way (up)
  2. Rating floors for us old folks artificially inflate our ratings and keep pumping free points into other people’s ratings.
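A sketch of the floor effect under a generic Elo-style update (again an assumption, not the actual USCF formula): when the loser is sitting on his floor, the points the winner gains are never subtracted from anyone, so every such game pumps new points into the pool.

```python
K = 32  # assumed coefficient

def expected(ra, rb):
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def update_with_floor(ra, rb, score_a, floor_b=0):
    """Winner gains the full exchange, but the loser can't drop below a floor."""
    delta = K * (score_a - expected(ra, rb))
    new_a = ra + delta
    new_b = max(rb - delta, floor_b)
    return new_a, new_b

# No floor in play: zero-sum, the pool total is unchanged.
a, b = update_with_floor(1900, 2000, 1.0)
print(a + b - 3900)   # 0.0

# Loser is on a 2000 floor: the winner's gain is free money for the pool.
a, b = update_with_floor(1900, 2000, 1.0, floor_b=2000)
print(a + b - 3900)   # > 0
```

Combine this with one-way bonus points and you get the steady trickle of inflation described in factors 1 and 2.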

Any statistical analysis should completely throw out the effect that people with rating floors have on the system. I don’t know why we stopped, but I liked the system where we all had a letter that indicated our highest rating category, and that was used to make us ineligible for class prizes and such.

I know personally I was an expert for about 6 games, and now I’m struggling to stay above 1900. But I also didn’t play a rated (or otherwise) game for 12 years, and I know I’m nowhere near as sharp as I was in 1993. So put me down as a big over-rated possibility!

Douglas, I don’t remember what the particular snafu was at this point. If it was a data error, Walter Brown is in charge of data corrections.

If it was a programming error that has been corrected, it’ll get picked up at the next rerate.

It was this event right here.

uschess.org/msa/XtblMain.php … 1-20109893

David Saltamachia is the player in question. He used to play and had a 16XX rating (1618 maybe?). He was entered into the system correctly, and the first time the tournament was rated, it rated him as a 16XX player. When the re-rate for the supplement happened, it re-rated the event with him as an unrated, and I think I gained 4 fewer points in the tournament.

Here is another tournament with almost the exact same problem:

uschess.org/msa/XtblMain.php … 1-20109893

Frank B Rives was another 16XX player 20 years ago. In this case I think his old rating wasn’t loaded in correctly.

Douglas, those are both issues that Walter Brown should be able to make sure are correct for the next rerate. His e-mail is wbrown@uschess.org.

Thanks - I sent him an email.