I suppose that would be considered gaming, @wojnomikolaj; and if I understand correctly what you mean, I don’t think that sort of thing is allowed. I’m referring to something that is, essentially, allowed, and yet wrong, which is why I want to change the rules to remove this loophole. If there is a clear rule, then something concrete has been broken in the first place, and there’s at least a decent chance that it’ll be addressed accordingly. If it isn’t against the rules, per se, then challenges are less likely to be brought, and committees are more likely to dismiss the complaint or issue a slap on the wrist due to unclear guidelines.
This is a very roughly drafted reform, likely to the Standards of Ethics, that I have in mind. If nothing else, I don’t see the rationale for most one-on-one matches (or small events among friends and colleagues) to be rated, other than for things like club or state titles; hence the simple suggested guidance to avoid ambiguity.
Also, to provide an update based on the discussion below, I’ve arrived at the conclusion that rural or small-state players should be incentivized to play in larger tournaments by means of strategic grants (from the U.S. Chess Trust and/or other charities seeking more rural access, perhaps even non-chess ones), especially to those who are on the verge of reaching 2200 – or perhaps anywhere in the expert range. The same could apply to A-class players seeking 2000. Beyond titles, grants, if not already offered, can be given to school teams in rural areas as well as to teams without resources, wherever they may be.
The ambiguity here regarding disciplinary action could be cleared up based on whether the player had ready access to larger or more competitive tournaments. Otherwise anti-competitive behavior (intentional or not) is occurring, and at the very least it should be curtailed at that point. For overt “gaming” scenarios, harsher penalties such as suspension can apply.
By the way, through some further analysis I’ve done, it seems that the insider “points trading” scenarios also involve these players setting up matches or small insider events to help a friend or colleague reach 2000, benefiting everyone involved (prestige, plus monetary gains from clientele and programming based on the broadly understood “strength” of titled players – I call these “EGO points”).
Lastly, I think there should be additional scrutiny of a player’s final few tournaments before achieving a new level, where incentives to dishonestly gain rating points among dishonest players and TDs (who, let’s be realistic, are out there) are highest.
As a programmer, I’m not sure how your proposals would be implemented and enforced without creating a lot of false positives, taking up staff time and severely dampening chess in less-populated areas, where most of the ‘abuse’ purportedly takes place.
If you have concerns over specific players, I suggest you document them properly and submit that to TDCC or Ethics. There is a filing fee to discourage frivolous complaints, but it will be refunded if your complaint is not considered frivolous.
I looked at several dozen recent National Masters; every one of them had multiple games in the past 6 months against high experts or masters, with scores that appeared to me to justify their 2200 rating.
Yes, pretty much. I’m not saying it doesn’t happen, but I don’t think it happens nearly as often as Mr. Bennett seems to think, not least because it’s very hard to do. I submit that there are virtually no cases of small pools that both a) have no outsiders visit and b) have no member of the pool play outside. Mr. Wiewel mentions Claude Bloodgood, but Bloodgood did this in a prison setting; he achieved his peak 28 years ago and died 23 years ago. I doubt it would be possible for such a player to get to 2789 today. And then, of course, there was the case of the EB member who claimed to take vacations with the same group of friends and played “tournaments” with them despite none of them playing real tournaments. But again, those shenanigans started in 1992. It’s a good bet that this wouldn’t work now, and it didn’t really work then. And I believe that he later admitted that the games were never played.
So yes, it’s possible, and yes, it’s a real problem when it happens, but it’s so hard to do it effectively and so few people try it that I don’t worry about it. I think the rules on TD fraud should take care of everything and the rules that Mr. Bennett wants to add are extreme overkill.
US Chess ID 12498726. US Chess National Tournament Director, FIDE and ICCF International Arbiter
You have to understand corruption very well before accusing people of corruption. If you do it wrong, the BAD GUY gets away.
Bloodgood was a long time ago, and it was the combination of the old “opponent’s rating minus 400 for a loss” provisional rule for a new player and the old 100-point floors that made this possible. The 100-point floors were eliminated over 25 years ago, the old provisional formula over 20 years ago, and the round-up-to-integer rule over 10 years ago.
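For readers unfamiliar with the old mechanics, here is a rough Python sketch of the old-style provisional performance average described above. This is illustrative only; the historical formula had additional special cases, and the function name is mine:

```python
def old_provisional(opponent_ratings, results):
    """Rough sketch of the old provisional performance average:
    each game contributes the opponent's rating +400 for a win,
    -400 for a loss, and +0 for a draw, then everything is averaged.
    Illustrative only -- not the exact historical US Chess formula."""
    contributions = []
    for opp, result in zip(opponent_ratings, results):
        if result == 1.0:    # win
            contributions.append(opp + 400)
        elif result == 0.0:  # loss
            contributions.append(opp - 400)
        else:                # draw
            contributions.append(opp)
    return sum(contributions) / len(contributions)

# A brand-new player whose only game is a loss to a 2000 would start
# at 2000 - 400 = 1600, regardless of actual strength -- the loophole
# that, combined with 100-point floors, made closed-pool inflation possible.
print(old_provisional([2000], [0.0]))  # 1600.0
```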
Throwing games and rating games that never were played in the first place are already serious breaches of the rules. It is really, REALLY, difficult now to gain rating points by playing legitimate games against lower rated opponents.
I might be hard to understand, but thank you for being correct.
Mr. Doan makes my point for me. There is no question that this has happened in the past, and that there are still people trying to do it today and who will keep trying in the future. But we’ve made a lot of changes to make ratings more accurately reflect the strengths of the players, and I don’t see why the existing rules aren’t enough to fight fraud. I think it is much more difficult than Mr. Bennett imagines for a player who is not already very close to 2200 strength (or 1600) to push himself over that line by playing weaker players.
And wacky ideas like mandating an area a player must play in to become an NM or convoluted formulas of how strong an opponent one must have played, or scored against, and how recently seem like ridiculously complicated solutions to problems that are already solved.
And if a 2100 player from Wyoming were willing to construct fake tournaments to push himself over the 2200 mark, what would stop him from constructing one set in Montana or South Dakota, or from giving himself opponents (preferably alive but inactive) rated 2300?
This notion of not scoring points vs. lower rated players (the fractions not being rounded up, as discussed above) is a straw man argument. A 2100 player can gain about 4 points vs. a 1900-level player, and that matters a lot when you’re trying to get those final points. Also, I’m not saying that tournaments need to be “fake” for the problem to apply. (To the contrary, such an arrangement would be easily flagged, in part because there are a lot of eyes on fast-rising players, which is why unethical methods are naturally more nuanced than is within the scope of your analysis.) They might simply be asking insiders, which, as you’ll see above, I’ve specified.

Also, as for the rural and small-state argument, this has been covered, and I’ve come to believe, among other things, that it should not be the focus – rather, whether playing in more competitive tournaments had been feasible for the new title holder should be taken into account in considering whether intentional anti-competitive behavior has occurred. And, as I’ve suggested, opportunities can be provided to these players – carrots, not sticks. So I think that’s pretty reasonable, and you’re addressing arguments that aren’t here, @relyea.
If statistically a 200 point spread means the favorite should score 3 points out of 4, the 2100 can hardly assume 4 rating points every time. It isn’t all that easy.
The expected performance formula says that a 200 point difference means the higher rated player should score 0.759747, so over 4 games that means a score of 3.038988, so a score of 3 may actually result in a slight drop in rating. (The multi-stage approach in the actual ratings programming was not taken into account here.)
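For anyone who wants to reproduce that arithmetic, here is a minimal Python sketch. This is just the single-stage logistic expectancy; as noted above, the actual ratings programming is multi-stage, and the function name is mine:

```python
def expected_score(rating_diff):
    """Logistic winning expectancy for the higher rated player,
    given the rating difference in that player's favor."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

e = expected_score(200)
print(round(e, 6))      # 0.759747 -- per-game expectancy at +200
print(round(4 * e, 6))  # 3.038988 -- expected total over 4 games
```

So scoring exactly 3/4 against opponents 200 points lower slightly underperforms expectation, which is why a small rating drop is possible.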
Some years ago I did a study and actual performance tends to be slightly lower for the higher rated player than expected performance at most ratings levels. That study might be worth repeating.
Once again, what is frivolous?
There are a couple of reasons for that.
1. In a standard Swiss, the R1 predictions tend to be fairly close, but in later rounds pairings between players with significant rating differences tend to occur when either the higher rated player isn’t playing well (or is overrated) or the lower rated player is playing particularly well (or is underrated), or both.
2. The rating calculations assume, when we are rating player A, that player B’s rating is known. If you instead assume that player B’s rating is unknown but centered on its current value, then, because of the non-linearity in the winning expectancy formula, the “true” winning expectancy is compressed a bit. How much depends upon the uncertainty in player B’s rating – with an s.d. of 100 (which is appropriate for players like Class C), it’s about 5%; with an s.d. of 75 (Class A), about 3%. Not huge, but it might shave a point off a tournament change for someone paired mostly down.
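A rough numerical sketch of that compression effect, done by weighted summation over a Gaussian grid. The exact percentages depend on the rating difference and s.d. assumed, so this only demonstrates the direction and rough magnitude, not the 5%/3% figures from the study:

```python
import math

def expected_score(diff):
    """Logistic winning expectancy at a given rating difference."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def compressed_expectancy(diff, sd, n=2001, width=5.0):
    """Average the winning expectancy over a normal distribution of
    the opponent's 'true' rating (mean = current rating, given s.d.),
    via direct summation over a +/- width*sd grid."""
    total, weight = 0.0, 0.0
    for i in range(n):
        x = -width * sd + 2 * width * sd * i / (n - 1)  # rating error
        w = math.exp(-x * x / (2 * sd * sd))            # Gaussian weight
        total += w * expected_score(diff - x)
        weight += w
    return total / weight

# Because the expectancy curve is concave above 0.5, uncertainty in the
# opponent's rating pulls the averaged expectancy back toward 0.5:
point = expected_score(400)
for sd in (100, 75):
    print(sd, round(point - compressed_expectancy(400, sd), 4))
```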
I think in that study I compensated for #2 by only looking at games between players with established regular ratings. I could probably refine it a little further by only looking at players who have recent ratings activity, which should eliminate at least some of the stale ratings.
It’s not logarithmic? Meaning the actual ratings, not the difference between the ratings.
The expected performance part of the ratings formula only looks at the difference between ratings, which means it assumes an 800 player has the same expected score against a 600 player as an 1800 player would have against a 1600 player.
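That difference-only property is easy to check in a couple of lines of Python (the function name is mine):

```python
def expected_score(higher, lower):
    """Winning expectancy for the higher rated player; note it
    depends only on the difference higher - lower."""
    return 1.0 / (1.0 + 10.0 ** (-(higher - lower) / 400.0))

# Same 200-point gap, very different absolute ratings, same expectancy:
print(expected_score(800, 600) == expected_score(1800, 1600))  # True
```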
For full details on the ratings formula, see
Those are roughly the typical standard errors for established players. Most people are surprised by how much natural variation there is in ratings.
Perhaps Mr. Bennett doesn’t fully understand the impact of the floating-point ratings. The issue has never been about 2100 players playing 1900 players. Those games tend to be pretty competitive, and I would assume any 2100 player who could win 25 games in a row against 1900 players is very underrated. The problem, IMO, was 2100 players – more likely 2175-2190 players – playing opponents rated 400 to 800, where there is something like a 99.9 to 99.99% expected score, and gaining one point. Very, very easy to do that 10 to 25 times in a row to become a master.
Tournaments are fake, whether games are actually played or not, when players conspire to get specific results.
I’m honestly not sure whether Mr. Bennett agrees with me or not that players earning titles in closed pools isn’t a problem and hasn’t been for thirty years.
An 800 point difference produces an expected score of .990099.
A 600 point difference produces an expected score of .969347.
A 400 point difference produces an expected score of .909091.
A 200 point difference produces an expected score of .759747.
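Those four values can be reproduced with a few lines of Python (the function name is mine):

```python
def expected_score(diff):
    """Logistic winning expectancy at a given rating difference."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

for d in (800, 600, 400, 200):
    print(d, round(expected_score(d), 6))
# 800 0.990099
# 600 0.969347
# 400 0.909091
# 200 0.759747
```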
Yes, but I was talking about 800 or 400 rated players, not differences. What is the expectation with a 1400 or 1800 difference?