Speed of Chess engines.

I take this from the Rybka Forums.
[I added the italics]

I find it rather interesting. I didn’t realise that Deep Blue only had 1.15 billion transistors and only ran at 120 MHz. Although the speed doesn’t surprise me, since it was over 10 years ago, it’s a stark reminder of just how far computers have come. We’re not far away from consumer-level CPUs breaking the 1 billion transistor mark; it’s certainly doable within a couple of years.

Other parts of the discussion in the thread (on the Rybka forum) talked about how much chess engines have improved over brute-force techniques, which were the primary tool used to increase engine strength in the 1990s.
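To get a feel for why the algorithmic improvements mattered more than raw speed, here’s a rough back-of-the-envelope sketch in Python. The ~35 branching factor is the usual figure quoted for chess, and the b^(d/2) figure for well-ordered alpha-beta is a textbook approximation; the depths are just illustrative assumptions.

# Rough node-count comparison: full-width ("brute force") search vs. alpha-beta.
BRANCHING = 35   # roughly 35 legal moves per chess position

def full_width_nodes(depth):
    # Brute force: examine every move at every level.
    return BRANCHING ** depth

def alpha_beta_nodes(depth):
    # With good move ordering, alpha-beta visits roughly b**(d/2) nodes,
    # so for the same work it reaches about twice the depth.
    return BRANCHING ** (depth / 2)

for d in (4, 6, 8):
    print(f"depth {d}: brute force ~{full_width_nodes(d):.2e} nodes, "
          f"alpha-beta ~{alpha_beta_nodes(d):.2e} nodes")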

I think Intel has already announced that their next generation CPUs will have well over a billion transistors in them.

I thought I heard that too, but wasn’t positive. I think the Core i7s are well over 900 million transistors, if I remember correctly.

Of course, it takes more than brute force to make a good engine. I remember a while back… at least 5 years ago, maybe longer, Deep Junior, running on a single 4-CPU* machine, won a major championship, and one of the competitors was a brute-force engine running on some souped-up cluster, and the cluster didn’t actually do all that well.

Still, there’s no denying that a great chess engine will improve with more processors, although the curve tends to flatten the more cores you add (there’s a quick sketch of that below).

*This was before multi-core processors
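That flattening curve is basically Amdahl’s law: whatever fraction of the work can’t be parallelised puts a hard ceiling on the speedup, no matter how many cores you add. A minimal sketch, assuming a made-up 10% serial fraction (real engines will differ):

# Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores)
# The 10% serial fraction is an assumption for illustration, not a measured figure.
def speedup(cores, serial_fraction=0.10):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1, 2, 4, 8, 16, 64):
    print(f"{cores:3d} cores -> {speedup(cores):.2f}x speedup")

# Even with unlimited cores the speedup never exceeds 1/serial_fraction (10x here).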

Here’s an interesting paragraph from Wikipedia about Deep Blue:

Heh, didn’t know that till I just read about it today.

Here’s an excerpt from a press release issued at the Intel Developers Conference last month:

They’re packing more and more into the real estate, for sure. I wonder what 22 nm will mean on the consumer level. Since Intel is going to be putting the memory controller and GPU functions on the CPU in the near future, it makes me wonder whether the chips themselves will offer much improvement over today’s chips… as far as chess engines are concerned.

I think Intel will still end up making server chips that are purely based on cores, but how many consumers will be going out of their way to buy a server-style computer? It will most likely mean the high-core-count, cluster-style chess machines will still be out of reach of the average person.

Looks like GPGPU-type computers (General-Purpose GPU) will remain a niche product. That’s taking a video card and using it for applications other than video. It lends itself well to certain workloads, but programmers seem to think it’s not really suited for chess. Nvidia’s Tesla is an example of a GPGPU. It’s not a consumer-type item, and it’s pretty certain it will remain a niche product for businesses that can program their own applications for it.

A number of years ago I was at a conference where Andy Grove, then the CEO of Intel, spoke and took some questions.

Someone asked him whether there was an end to the Moore’s Law curve in sight due to the laws of physics.

He said they didn’t expect that to become an issue until they got well below 5 nanometers, and even then vertical stacking would give them many more years’ worth of chip-density improvements. (Imagine 1,000 processors on a chip.)
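For context, the “Moore curve” he was asked about is just transistor counts doubling roughly every two years. A toy projection in Python, where the starting count, start year, and doubling period are all assumptions for illustration rather than anything Intel published:

# Toy Moore's-law projection: transistor count doubling roughly every two years.
def projected_transistors(year, base=1.0e9, base_year=2010, doubling_years=2.0):
    # base ~1 billion transistors around 2010 is an assumed starting point.
    return base * 2 ** ((year - base_year) / doubling_years)

for year in (2010, 2014, 2020):
    print(year, f"~{projected_transistors(year):.1e} transistors")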

I first learned to program on a big Control Data computer back in the ’60s. Recently someone came up with an emulator for that computer, a CDC 6600.

When the emulator program is run on a fairly recent generation PC, it will run faster than that multi-million dollar beast did.

Slightly OT - I remember a Scientific American article in the late 1970s arguing that computers would soon think more like humans. Heh.

It’s thirty years later - we bow before the power of minimax - but computers are still nowhere near a human’s “artificial intelligence.”
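For anyone who hasn’t seen it, the minimax idea being bowed to is only a few lines of code. Here’s a toy Python sketch on a Nim-style stones game rather than chess, just to show the shape of the recursion; real engines cut the search off at a fixed depth, use a static evaluation, and add alpha-beta pruning, move ordering, transposition tables, and so on.

# Toy game: a pile of stones, each player removes 1-3, taking the last stone wins.
# Scores are from the maximizing player's point of view: +1 win, -1 loss.
def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

# Prints -1: with 12 stones the side to move loses against best play.
print(minimax(12, True))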

There’s a wonderful position, also from the Rybka Forum:

tinyurl.com/y8axo3z

Of course, engines can make it to Elo 3100 without having a clue about this position… But it might be interesting to revisit the more human approaches with all this extra computing power. (Especially since each ply offers diminishing returns…)
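On the diminishing-returns point: because the tree grows exponentially with depth, doubling the hardware speed only buys a fraction of a ply. A quick calculation, assuming the usual ~35 branching factor for chess and an effective factor of roughly 6 once pruning and move ordering kick in (both figures are rough assumptions):

import math

# Extra plies gained from a 2x speedup, if the tree grows by `branching` per ply.
def extra_plies(speedup, branching):
    return math.log(speedup) / math.log(branching)

print(f"{extra_plies(2, 35):.2f} ply")   # raw branching factor ~35: about 0.19 ply
print(f"{extra_plies(2, 6):.2f} ply")    # effective factor ~6 with pruning: about 0.39 ply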