There’s always room for improvement, but there isn’t always time or budget available for improvement.
A well-reasoned explanation of why something needs improvement, preferably with suggestions for improving it that don’t break other stuff, is what I consider constructive criticism.
One of the biggest challenges I see in the new software is making it work reasonably well on small-screen devices like a smartphone. MSA on a cell phone is terrible, but it was designed in 2002 (by another former US Chess staff member); the iPhone wasn’t introduced until 2007. A proposal to rewrite MSA several years ago never made it through the approval stages.
There just isn’t a lot of real estate to work with there, and I’m TERRIBLE at typing text into a cell phone, so typos are all too easy to make.
I know Leago is using adaptive screen software, but some things just can’t shrink down that much and be readable/editable.
Another change I would like to see made is to severely rate-limit MSA access by IP address. Even a fast typist is unlikely to make more than 10-15 data requests in a minute, maybe 25-30 if they are doing drill-down clicks; web-scrapers can do a couple of orders of magnitude more than that.
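A minimal sketch of the idea, with invented names and thresholds rather than anything MSA actually runs: keep each IP’s recent request timestamps and refuse anything beyond roughly 30 requests per minute.

```typescript
// Sketch only - not MSA code. Track recent requests per IP and
// refuse anything over the per-minute cap.
const WINDOW_MS = 60_000;      // one minute
const MAX_PER_WINDOW = 30;     // roomy enough for fast drill-down clicking

const recentHits = new Map<string, number[]>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  // drop timestamps that have aged out of the window
  const hits = (recentHits.get(ip) ?? []).filter(t => now - t < WINDOW_MS);
  const allowed = hits.length < MAX_PER_WINDOW;
  if (allowed) hits.push(now);
  recentHits.set(ip, hits);
  return allowed;              // false => answer with HTTP 429 instead of data
}
```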
For those applications that really need access to large amounts of data, an API with a license key for tracking and limiting usage is the way to go. The API could run on a separate server so bulk data access doesn’t slow down member access via MSA.
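And the license-key side could be as simple as a per-key quota that both counts and caps usage; again, a sketch with made-up names and numbers:

```typescript
// Hypothetical license-key check: every call is counted against a
// daily quota, which gives tracking and limiting in one place.
interface License { owner: string; dailyQuota: number; usedToday: number; }

const licenses = new Map<string, License>([
  ["demo-key-123", { owner: "Some Ratings App", dailyQuota: 50_000, usedToday: 0 }],
]);

function authorize(apiKey: string): License | null {
  const lic = licenses.get(apiKey);
  if (!lic || lic.usedToday >= lic.dailyQuota) return null; // unknown key or over quota
  lic.usedToday += 1;          // usage tracking for reporting/billing
  return lic;
}
```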
This would also allow us to separate bulk traffic being handled programmatically from user traffic, so we could put ad pages on MSA. We’re throwing away half a million hits a month now, though a significant percentage of those are web-scrapers. (Advertisers don’t want to pay for bot hits.)
The issue isn’t that the checkbox exists; it’s that it is not prominent enough. A year ago I got stuck at that screen, not realizing why it wasn’t moving forward. I know other TDs who have had the same experience.
And that, on its own, is not the issue. The problem is that the whole system isn’t user-friendly and is a deterrent for anyone new. Every long-lived organization has had to deal with tech debt, so that isn’t an excuse for bad software. And your customers (the users of your software) actually don’t owe you a well-reasoned explanation with ways to fix something that doesn’t break other things. That is actually the job of US Chess as it serves its membership. I’d say US Chess is lucky that there are still people who care enough to let you know it isn’t good.
And what are all these things the organization is spending millions of dollars on that rise above this year after year?
You’d have to ask that question of the people who develop and manage the budget; I’m just a worker bee (retired). When I was the IT Director, I’d put together an IT budget that took several weeks to develop, and more than once large portions of it were not funded, NOR WAS I EVEN TOLD ABOUT IT! (I will say that was long enough ago that the ED has changed, as has nearly all of the Board.)
You’re quite right that good software development never ends; you just move on to the next iteration. But funding isn’t so fortunate.
The major expense categories for the last fiscal year were $1,662,555 for tournament expenses, $1,627,586 for general and administrative expenses, $910,372 for the magazine and $695,154 for programming. For more details see the most recent audited financials.
Last year tournament revenues minus tournament expenses resulted in a loss of $368,152, as reported in the financials. And of course the loss is greater in reality, because the cost of US Chess staff working at national events is not allocated to tournament expenses.
Donations, magazine revenues, sales revenues and investment income are also major sources of revenue.
What’s Leago? Google wants to help, but a Go league, a plumber, and baby names aren’t it.
My most annoying submission mistake came from importing a club roster that had an error (right name, wrong ID) that I didn’t catch on submission. I thought I was causing a lot of work for the staff to fix it. Now I appreciate that mistakes are common. So anything that changes in the interface to help is welcome. The staff will like cleaner submissions, too.
The Go league might be the right one; they’re a Canadian company that started out hosting and rating Go events (which also use an Elo-based ratings system and have many other similarities with chess events, so I think they ‘understand’ us). I believe their website is https://leago.gg
FWIW, so far this calendar year we’ve rated 21,366 sections.
We’ve made 284 player ID changes in 230 of those sections, or about 1.08% of those sections.
There have been 516 result changes made in 213 sections, or about 1% of those sections. When a result is changed it is generally changed for both players, of course, so that would translate to 258 result change pairs.
16 sections had both a player ID change and a result change.
Player ID changes may not always reflect incorrectly entered IDs; they may be changes caused by finding duplicate IDs. It appears that 160 of the player ID changes, in 153 sections, were due to duplicate IDs, which reduces the incidence of possible ID corrections to about 0.4% of sections.
Of course, if you base the error rate on the number of players in those sections (275,000), the correction rates are much smaller.
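In case anyone wants to check the arithmetic behind those percentages, here it is (reading the ~0.4% as the 230 ID-change sections minus the 153 duplicate-ID ones):

```typescript
const sections = 21_366;                                 // sections rated so far this year
console.log((230 / sections * 100).toFixed(2));          // 1.08 - sections with player ID changes
console.log((213 / sections * 100).toFixed(2));          // 1.00 - sections with result changes
console.log(516 / 2);                                    // 258  - result-change pairs
console.log(((230 - 153) / sections * 100).toFixed(2));  // 0.36 - roughly 0.4%
```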
Sure, it’s a small percentage overall, but a large percentage would indicate a systemic error. I was counting the actual instances, which are what create more work for the staff. One in ~200 made me feel a little better.
I can’t tell you how many times I’ve been caught by that checkbox that’s lost in the mass of surrounding text. When our club used to run just four tournaments a year, I would miss it every single time because we didn’t do this often enough to remember that we needed to check that box. And then I’d be scratching my head wondering what the heck the problem was.
An improvement to the UI would be, if the TD misses checking that box, to have the page refresh center on the checkbox with some easy-to-spot visual indication of what the problem is (yellow background highlighting, boldface red text, a big arrow pointing to what still needs to be filled out, etc.).
OK, I’ve added an ‘autofocus’ tag to the checkbox, which should position the cursor at that field, and changed the header to be a bit larger, bold, and red, which should make it stand out more.
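In case it helps to picture it, the change amounts to roughly the following; the element ids here are invented (the live page just uses the HTML autofocus attribute plus some header styling):

```typescript
// Illustrative only; the real form's ids and markup will differ.
const box = document.querySelector<HTMLInputElement>("#compliance-checkbox");
const header = document.querySelector<HTMLElement>("#compliance-header");

box?.focus();                          // what the autofocus attribute does on page load
if (header) {
  header.style.fontSize = "1.25em";    // a bit larger
  header.style.fontWeight = "bold";    // bold
  header.style.color = "#c00";         // red
}
```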
BTW, if you don’t check that box, you DO get a message in red at the top of the event that says: Cannot release Event, Compliance statement not checked
Note: I don’t see any complaints in email or in the forum over the last few years asking for a change here. If nobody indicates something is a problem, it isn’t likely to get addressed.
FWIW, we HAVE received complaints from people about making headings or warnings red, apparently some people with color-blindness don’t see those fields well.
I do remember seeing (many times) that message at the top about “Compliance statement not checked.” The trouble lay in pinpointing the offending line and its box in the wall of gray text.
Interesting about some people complaining about warnings being in red. An alternative could be flashing text (although I suppose someone with epilepsy might have a problem with that). The key is to find an effective way to point to where the issue is.
Another approach would be to set off the line that includes the checkbox from the mass of text on that page. Increasing the point size, the weight (boldfacing), or (especially) the leading around it might help. Just anything that will set that line clearly apart from the rest of the text.
I just submitted my first rating report after this change was made, and it is GREAT. The box that needs to be checked was obvious at a glance, hard to overlook. And as you said, the cursor was pre-positioned in the needed spot. Can’t make it more convenient than this.
Does Leago have the ability to deal with split results? There are times when a TD or US Chess committee or someone uses the “split result” to solve a game result issue; i.e., one player is assigned a win while the opponent gets a draw.
Yes, Leago is aware of the need to support inconsistent result codes.
They may affect other goals for the new system, such as the ability to compute tie-breaks so crosstables can be displayed in tie-break order. Personally, I think computing tie-breaks after the fact has enough other problems that I’d rather create a new upload format that includes several tie-break columns (is 3 enough?).
I agree completely that trying to have the MSA software recompute tie-breaks is a bad idea. My suggestion would be to include a standings field and (if it’s defined) allow that to be used if you want to sort on standings. Remember that you could have 1st place determined by a blitz playoff; while you can input that as a #1 TB (1 for the winner, 0 for everyone else), that would knock the TBs used for the rest of the field down a notch. And 3 standard tie-breaks isn’t enough in a short tournament.
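To make that concrete, here’s the sort of record shape I’m picturing; every field name is hypothetical, and three TB columns is just the number floated above:

```typescript
// Hypothetical record shape, not any existing US Chess upload format.
interface PlayerLine {
  memberId: string;
  score: number;
  standing?: number;                      // pre-computed place; can reflect a playoff
  tieBreaks?: [number, number, number];   // TB values as the pairing program computed them
}

// A playoff winner placed 1st even though a rival has the same score and TBs.
const example: PlayerLine = {
  memberId: "12345678",
  score: 4.5,
  standing: 1,
  tieBreaks: [13.0, 15.5, 11.0],
};
```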
Note that FIDE now has a mind-boggling array of tie-breaks; the possibilities are so complicated that there are thoughts about creating an online tool where you upload the final TRF file and it does the calculations. (It’s not just things like median and modified median with arbitrary numbers of excluded scores; you can also insert head-to-head tie-breaks at any stage.)