USA's Jon Edwards is 32nd World Correspondence Chess Champion

Congratulations to ICCF Senior International Master and Chess Life Kids columnist Jon Edwards of New Jersey, who has won the 32nd World Correspondence Chess Championship, which finished on October 8.

iccf.com/message?message=1575

+1

Congrats to Jon Edwards. The cross table for the event is interesting. With the exception of the last-place finisher Steffen Bock, no participant either won or lost more than two games. (Bock probably had special circumstances. I ran a couple of the games he lost through Stockfish and the final positions were even.) Are these results typical of high-level ICCF events? I know that the ICCF allows players to use engines and tablebases.

Yes, all the top players use the latest software/engines.

…scot…

There are typically many draws even at levels maybe 400 points below those played here. I recall a recent tournament with 54 draws and one decisive result. It’s very hard to win a correspondence game against someone whose rating starts with a 2.

Alex Relyea

It’s especially hard to win given that everyone uses an engine whose rating starts with a 3.

At OTB time controls, yes, but engine play isn’t much better after ten hours than after ten minutes, whereas human play can be.

Alex Relyea

The engines understand chess only as well as they’ve been programmed to understand it, which means only as well as their programmers understand it.

They are nearly infallible in positions that can be brute-force analyzed, but not all positions meet that criterion.

I think it was Picasso who said that anyone can draw a line on a piece of paper, it is knowing where to put the line that makes one an artist.

For example: https://www.iccf.com/event?id=91170

Alex Relyea

[quote=“nolan”]
The engines understand chess only as well as they’ve been programmed to understand it, which means only as well as their programmers understand it.
[/quote]

I think programs like AlphaZero have made the above statement out of date. And congratulations to Mr. Edwards on a great achievement.

I’m not familiar enough with the AlphaZero engine to know whether it’s doing something different from other engines, but with the engines I have looked at there still comes a point when a decision has to be made whether position X is better than position Y, and what makes it better. Knowing why it is better is what makes it possible to exploit the advantage. That’s not something that, as far as I know, can yet be done solely by brute force, though with quantum computers it may become feasible.

I had a variation of this discussion with Hans Berliner at an ACM tournament around 1971, for what it’s worth.

Computer technology has evolved a bit since 1971. As I understand AlphaZero it has NO input from humans beyond just the rules of play. With the rules in place it proceeds to play a gazillion games against itself, remembering the outcome of all its decisions.
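A toy sketch of that idea (not what AlphaZero actually does internally, and the game here is invented for illustration): rank moves purely by how often they lead to wins across many random self-play games, with no knowledge supplied beyond the rules. The game is Nim with 5 stones, players alternately take 1 or 2, and whoever takes the last stone wins.

```python
import random

# Toy game: Nim with 5 stones. Players alternately take 1 or 2 stones;
# whoever takes the last stone wins. No evaluation knowledge is supplied --
# move quality is estimated purely from self-play outcomes.

def random_selfplay(stones, rng):
    """Play one game with uniformly random moves.
    Returns (first player's opening move, True if the first player won)."""
    player = 0
    first_move = None
    while stones > 0:
        take = rng.choice([1, 2]) if stones >= 2 else 1
        if first_move is None:
            first_move = take
        stones -= take
        if stones == 0:
            return first_move, player == 0  # taker of the last stone wins
        player = 1 - player

def winrate_by_first_move(games=20000, seed=1):
    """Tally the first player's win rate for each possible opening move."""
    rng = random.Random(seed)
    wins = {1: 0, 2: 0}
    counts = {1: 0, 2: 0}
    for _ in range(games):
        move, first_won = random_selfplay(5, rng)
        counts[move] += 1
        wins[move] += first_won
    return {m: wins[m] / counts[m] for m in (1, 2)}

rates = winrate_by_first_move()
# Taking 2 (leaving 3, a lost position for the opponent) scores far better
# in the self-play statistics, even though no evaluation function or human
# knowledge ever said so -- the outcomes alone reveal it.
```

The point of the sketch: "remembering the outcome of all its decisions" is enough to surface the good opening move, which is the essence of learning from self-play rather than from a programmed knowledge base.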

Which is the definition of ‘brute force’, a process demonstrated by Samuel with his checkers program. I actually wrote a game-playing program myself back in the early 70’s. (It played a particular form of solitaire, a form I’ve never won, though my wife has, once.)

My point is that brute force programs don’t really KNOW (or care) why move X is better than move Y, not in the sense that a grandmaster does (or thinks he does.) They just know the end result of the chain of moves they’re able to analyze.

That’s the fallacy in most AI projects. There’s no intelligence or self-realization. Will there be, some day? Possibly. I’m not sure that’s a positive development, though.

Nothing’s changed since the 70’s; computers have just gotten a lot faster, is all. But when the original paper describing database theory came out, it was said to be unimplementable, because the computers of that era just weren’t fast enough for it to be practical and affordable. When faster computers became available 20 years later, Larry Ellison became a billionaire.

Mike, I am sorry but I think your post is nonsense. When it comes to chess programs, everything has changed, just in the last few years.

First of all, I acknowledge that you are far closer to being an expert in computer science than I am. However, I think you have not understood (or more likely failed to research thoroughly) what the new AI chess programs are doing.

Conventional chess programs analyze all legal moves to the limits of their search horizon, and then use their provided knowledge base to evaluate all the positions, with the highest-scoring move being played. The new AI programs do something different. They have no provided knowledge base (other than what constitute legal moves). They have a search horizon too but they don’t care as much what is going on at that point. Instead, across millions of practice games, they look for patterns – types of moves that have a high success rate, not at the search horizon but at the end of the games. It turns out that there are lots of moves – typically material sacrifices for long-term pressure – which, while they will be evaluated negatively at the search horizon of a conventional program, will nevertheless have a high rate of success over the longer term. So, the new programs are able to play certain moves “on faith”, knowing that they will likely succeed at a point far deeper than any concrete analysis can reveal.
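The conventional half of that description can be sketched in a few lines (a toy illustration, not real chess -- the game tree, position names, and leaf scores below are all invented): a depth-limited negamax search that analyzes to a fixed horizon and then falls back on a handcrafted evaluation, playing the highest-scoring move.

```python
# Toy "conventional engine": depth-limited negamax over a made-up game tree.
# Leaf scores (in "pawns", from the root player's view) stand in for the
# handcrafted knowledge base described above.

def negamax(node, depth, tree, evaluate):
    """Search to a fixed horizon, then fall back on the evaluation function.
    Returns the value from the perspective of the side to move at `node`."""
    children = tree.get(node, [])
    if depth == 0 or not children:
        return evaluate(node)
    # The side to move picks the line that is worst for the opponent.
    return max(-negamax(child, depth - 1, tree, evaluate) for child in children)

def best_move(node, depth, tree, evaluate):
    """Play the highest-scoring candidate at the root."""
    return max(tree[node], key=lambda c: -negamax(c, depth - 1, tree, evaluate))

# Hypothetical two-ply tree: the root player chooses a quiet move or a sacrifice.
tree = {
    "root": ["quiet", "sacrifice"],
    "quiet": ["q1", "q2"],
    "sacrifice": ["s1", "s2"],
}

# At the search horizon the sacrifice lines look materially bad, so a
# conventional engine rejects them -- even if (as the text explains) such
# moves might statistically pay off far beyond the horizon.
scores = {"q1": 0.2, "q2": 0.5, "s1": -0.8, "s2": -1.0}

def toy_eval(node):
    return scores[node]

value = negamax("root", 2, tree, toy_eval)
choice = best_move("root", 2, tree, toy_eval)
```

Here `choice` comes out as the quiet move: the horizon evaluation dominates, which is exactly the limitation the outcome-based programs sidestep.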

The value of this new strategy has been proven by the fact that AI chess programs are starting to dominate the conventional ones:

• 2017 — AlphaZero, a neural net-based digital automaton, beats Stockfish 28–0, with 72 draws, in a 100-game match.

• 2019 — Leela Chess Zero (LCZero v0.21.1-nT40.T8.610) defeats Stockfish 19050918 in a 100-game match 53.5 to 46.5 for the TCEC season 15 title.

In the meantime, there has been a big shakeup in the play of human GMs, who have not been slow to learn this lesson. Here are links to a two-part article from the ChessBase web site, which discusses in detail how human GMs have been influenced by these new engines:

en.chessbase.com/post/how-the-a … -chess-1-2

en.chessbase.com/post/how-the-a … -chess-2-2

– NM Hal Terrie

What’s interesting is that the “winning” game was lost due to a flat-out piece drop in what was basically an equal position.

Pattern recognition in chess engines is not a new development, I think it was the UCLA team that was using that approach in the early 70’s.

This does not change the fact that there has been a major breakthrough in this area just in the last five years. The published facts speak for themselves.

– Hal Terrie

Luck counts.

This ancient saw was already debunked by Deep Blue in 1997. None of the programmers was stronger than Expert, but it beat Garry Kasparov.

TBH, IBM also hired some grandmasters, including Benjamin and DeFirmian, to help with creating the opening book and diagnosing problems. But those guys aren’t programmers, and in any case, none of them was close to as strong as Kasparov at the time.

This topic was automatically closed 730 days after the last reply. New replies are no longer allowed.