4/6/2006 – The reader mail created by our two ChessBase Workshop columns
on distributed computing and the "solvability" of chess continues to
pour in unabated. Our intrepid columnist once again braves the
slings and arrows of outrageous fortune (or at least those of our
readers) to present some e-mail commentary generated by the
topic, as well as his replies.

We'll provide the reader e-mails in regular type. When I respond, I'll do so in italics (to differentiate between my nurgling and the more intelligent comments of my readers).

----------

The last time I read about man vs. machine, with the machine winning the game, I thought it was still an unfair game. The machine will get stronger and stronger because it is upgraded again and again, and more than that, the machine is the product of many people, not just one person, yet it plays against one GM. That is an unfair game. If the machine played against five GMs in one game, that would be an interesting match. Or, like Garry Kasparov against the World, that would be an even more interesting match. Back to your article: we still don't know whether computers will get enough speed and memory capacity to store every best move in the years to come. But I know that if that happens, it is no more possible to know who would win than if one thousand GMs played against one GM; it is like a car racing a human in a running competition. But it will happen if the humans who create the machines keep upgrading them with more speed and capacity.

Alexander Newbmon

----------

One mistake in your exposition: you say the game is a forced win for black or white, but it may be a draw with best play.

Some other comments:

1) The number of positions is not equal to the product of the number of moves for Black times the number of replies for White times the number of further replies for Black, etc., since many sequences will produce the same position, and as the amount of material diminishes past a certain point there will be fewer possibilities. Finally, some paths will lead to threefold repetition, mate, or stalemate, so those branches stop at that point. Having said that, there are still a lot of positions!

2) There is presumably a way to axiomatize the game so that instead of inspecting positions one could try to deduce a conclusion from the axioms. I suspect this too would be forbiddingly difficult in terms of time required, but at least it's an alternative approach.

3) Suppose chess were a forced win in 12 moves: then we might be able to find it on some future computer. Of course this is very unlikely, but it leads to an idea: since we don't know how far down the tree a forced win may lie, we really don't know how long it may take to find it. We can "cheat" by looking for a forced material advantage after, say, 15 moves, then subject those lines to further calculation. There's got to be a better approach than brute-force examination of positions. Even so, I suspect you are right that the problem cannot be solved in a reasonable time, but my point is that there is an outside chance that it can.

4) It would be an interesting mathematical exercise to create a game like chess that was a forced win in 15 moves, and see what techniques were necessary for its creation.

Peter Henderson

Charlottesville, VA

*Actually, I don't think I said that the game is a forced win for either side -- that's just the premise used by people who are worried that a computer may someday "solve" chess and was the launching point for the discussion. My personal opinion is that a game played perfectly by both players would result in nothing left on the board but the two Kings, but that's probably more a philosophical premise than a mathematical one.
*

*
Addressing your other points:
*

*
1) I agree with you (in fact, I mentioned this in a recent column I wrote for another website). However, I chose to phrase the problem in the manner I did in order to keep the basic multiplication as simple as possible (I'm a history guy, not a math guy).
2) If I'm understanding you correctly, this is actually a pretty good definition of planning (as opposed to short-term tactics). The problem is similar to one addressed in the discussion on "dumbing down" a chess program: while it's easy for humans to learn and understand these axioms or heuristics, it's tough to quantify them in terms that a computer program can understand.
3) There's always that possibility, definitely.
4) There's almost certainly such a game someplace. Tic-Tac-Toe has been solved, of course, but there's no shortage of strategy games (many of them commercial products) with more substance than that. I'm almost certain that a (flawed) strategy game exists that's been solved (but I'm too tied up right now to do the research required, even if it's just Googling for it). You've certainly presented an interesting idea. Any takers out there?
*

*
By the way, Charlottesville is a beautiful area. I spent some time there last summer on one of my history jaunts. You're blessed to be living there. -- SL*

----------

I read your article about the countless number of variations in chess and how spreading the work over many computers still doesn't work. But why not try something IBM has been doing: the World Community Grid, which is in effect a large supercomputer spread across the world, used to help with the Human Proteome project, smallpox research, and developing candidate drugs to fight HIV/AIDS. If a grid like this were set up for chess, with the work split among millions of computers, it would bring us many steps closer.

Mihai Gheorghe

*And they're doing some danged fine work, too. So much so, in fact, that I question whether using distributed computing to solve something as trivial (in the grand sphere of things) as the game of chess is worth the computing time expended when there are so many more important problems to be addressed. I don't recall whether or not that actually made it into my previous articles, but I certainly meant for it to be in there. Thanks for writing! -- SL*

----------

You are probably correct in saying that we can never find the "ultimate truth" of chess with the help of computers.

I would, however, not say with 100% certainty that you are correct!

The reason for this is that we cannot predict the development of computing technology.

Current trends seem to indicate that PC speed (GHz) will not increase drastically in the future, even if methods exist to further speed up computations (dual core etc.).

Research in optical computing might give internal communications in a PC speeds close to the speed of light, but even then it would seem that the computation would take too long.

If quantum computers ever become available, who knows? But until then we'll use *Fritz* with its "limitations". It beats me anyway!

Knut Johnsen

Norway

----------

In your last column titled "Eco and distributed computing", you give rough estimates for the total number of possible board positions after a few moves. This is just to let you know that exact numbers have been computed in some cases: for example, a lot of information can be found on François Labelle's page "Chess Problems by Computer".

For example, after 4 moves by White and 4 moves by Black, we have 25,600,000,000 positions with your exponential estimation but only 988,187,354 distinct positions that can actually be attained in practice, a still respectable but somewhat smaller number. The difference from your estimation is essentially due to the fact that different sequences of moves can lead to the same position. This quantity seems to be multiplied by roughly 10 for every additional move, but it has only been computed up to 9 plies, for which it is equal to 9,183,421,888 positions.

On the other hand, the total number of games (i.e. possible ways to play those 8 moves, including those leading to the same positions by transposition) is 84,998,978,956 games, more than 3 times your number.

The difference here with your estimation is due to the fact that there are often more than 20 possible moves for a given position.

I hope that you will find this interesting,

François Glineur

*Thank you, sir! That's more than merely "interesting". I tried to double-check the math and it was pretty fun, right up to the point that smoke started rolling out of my calculator.
*

*
The transpositional nature of the resulting positions can be easily checked with something like the Fritz Powerbook; you'll often encounter multiple ways to get to Position B from Position A. When I've discussed transpositions with schoolkids, I've always liked to use the simple example of 1.Nf3 Nc6 2.Nc3 Nf6 and 1.Nc3 Nc6 2.Nf3 Nf6 reaching the very same position.
*

*
Here again, the reason I "shot low" by giving an example with twenty possible moves was to keep the math fairly simple. In a typical middlegame position, the moving side usually has more than twenty legal moves, and it's highly unlikely (I'd say "impossible", but someone would likely correct me) that a position exists in which a side has twenty legal moves, each of which has exactly twenty responses, with twenty responses to each of those, ad infinitum. But the hypothetical example served its purpose: to illustrate the exponential nature of the numbers involved. -- SL*
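For the curious, the relationship between the simple "twenty moves each" estimate and the exact figures quoted above can be checked with a few lines of arithmetic (a sketch only; the exact counts are the ones cited from Labelle's data, not computed here):

```python
# Naive estimate: assume exactly 20 legal moves at every turn.
# After 8 plies (4 moves by each side) that gives 20^8 sequences.
naive_positions = 20 ** 8
print(f"naive estimate after 8 plies: {naive_positions:,}")  # 25,600,000,000

# Exact figures quoted in the e-mail (from Francois Labelle's data):
exact_distinct_positions = 988_187_354   # distinct positions after 8 plies
exact_games = 84_998_978_956             # distinct move sequences after 8 plies

# Transpositions make the distinct-position count far smaller than the
# naive estimate, while branching factors above 20 make the game count larger.
print(f"estimate / distinct positions: {naive_positions / exact_distinct_positions:.1f}x")  # 25.9x
print(f"games / estimate:              {exact_games / naive_positions:.1f}x")               # 3.3x
```

The two effects pull in opposite directions, which is why the simple 20-per-move model lands between the two exact numbers.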

----------

Personally, I get the impression you are a bit pessimistic about the possibilities of a strong computer engine. Having used Shredder 9, and intending to switch to the new top engine Rybka, my impression is that such engines on a fast computer can evaluate positions just as well as, or even better than, a GM -- and at least faster.

This means that doing research on opening variations certainly is possible, and will give better results than the simple statistics with which the ChessBase opening trees seem to work.

Then, about "solving chess" in such a way: indeed we would need to connect the results of any opening research via an engine to endgame tablebases to see if chess could be solved. Personally I suspect it's a draw, but with some minor extensions of the rules (e.g. removing the 50-move draw rule), chess *might* be solved in the future.

You might be interested in the chapter "Solving chess, a starting point" of my new book, which you can read on my site.

...

And then a few comments about your calculations of the number of possibilities when evaluating positions:

Have you never heard of the alpha/beta pruning technique? It's 100% mathematically correct, and reduces the number of positions an engine has to analyze by about 95%.

Secondly, the number of positions with equal material is much smaller than the number of unbalanced positions, and thirdly, there are a lot of transpositions.

When you take all these factors into account, chess is a finite game, and might be solved in the future.

Yes, by computers, not by humans.

Hans Berliner thought chess could be solved, but he was wrong that 1.d4 is the best move. It's 1.e4.

jefk/kec

*Thanks for your e-mails (which I've combined into a single entry). There's a lot to comment on here...
*

*
I never doubted the speed of computers. The issue with a single user analyzing all of the variations in ECO out to the nth degree (which is, of course, the query which prompted the whole discussion) is one of scale -- there's just no way one guy is gonna do it all with a commercial chess program on standard operating hardware, no matter how fast the engine or processor. It would be a single user's "life's work" to analyze and extend every ECO variation to, say, another ten moves. And, as I demonstrated in that article, it's a mammoth task even if the work is divided among several users/machines. I seem to remember an Internet group trying to do this within the last decade; I'm not sure what the final result was -- but since I haven't heard anything more about it in a long while, I doubt that the result was terribly earthshattering.
*

*
There's a lot of ground between the opening and endgame. Current tablebases are six pieces maximum (and even all of the six piece possibilities aren't available yet); I don't see any practical way (presently) to store tablebases which consist of much more material. And there's all that middlegame (between the opening and ending) to consider; I'm not sure how a "shortcut" to link the opening and endgame could be created.
*

*
I'm very familiar with alpha/beta pruning, but (as I discussed in that prior column) it's not an option here -- all moves and possibilities must be considered or the whole exercise becomes moot. And while chess is definitely finite, the possible variations are so numerous that I don't think any single one will be found to be a "forcing" line; it's not as simple as a forced "mate in x".
*
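For readers who haven't run across it, the alpha/beta technique mentioned above can be sketched in a few lines. This is a toy illustration on a hand-built tree of made-up leaf evaluations, not real engine code:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha/beta cutoffs on a toy game tree.

    A node is either a number (a leaf evaluation) or a list of child nodes.
    Branches that provably cannot change the result are skipped, which is
    why the technique is mathematically exact yet much faster than plain
    minimax.
    """
    if isinstance(node, (int, float)):        # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                 # cutoff: opponent avoids this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:                 # cutoff
                break
        return best

# A made-up 2-ply tree: the maximizing side picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [1, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

Note what pruning does and doesn't buy you: it returns the same value as exhaustive minimax, but it never enumerates every position, which is exactly why it can't be used for the "count everything" exercise discussed above.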

*
I don't think there's a best opening move. I prefer something between moves two and six that makes my opponent squirm uncomfortably in his chair and maybe utter an expletive or three. -- SL*

----------

I've been following your column off and on for several years now, and I have to congratulate you on a fine job; you are, as always, entertaining to read. However, I was motivated to write in for the first time when I read your most recent Chessbase Workshop, wherein you describe the problems in "solving" chess. The problems with this are well known, and you touch on them in your article, but I'd like to mention that were mathematicians to attempt to solve chess they would be using a slightly different technique.

For a game of perfect information, instead of looking -forward- in the game tree, we would start by completely specifying the game tree and then using backward induction*. In a game like this, backward induction guarantees a solution of some sort; the only reason that this has not been done yet for chess is that, as you showed in your article, the game tree for chess is unimaginably large. However, were it possible to fully specify the tree it would be a (relatively, of course) trivial matter to work back through the tree and find the best solution.

This would eliminate the problems you point out of computers playing imperfect chess and of the horizon effect. Of course, it is still practically impossible to construct a full game tree for chess with current technology, which is why chess engines don't use backward induction to choose their moves, but there you have it.

*In case you are not familiar with backward induction, here's a quick and dirty intro:

Let's examine a game tree (see attached image from Wikipedia) for a game called 'Ultimatum'. This is a game of perfect information; player one chooses a proposal ("*F*air" or "*U*ltimatum" in the game tree), and player two responds by "*A*ccepting" or "*R*ejecting" that proposal. Payoffs are labelled (First Player, Second Player), so in the left-most terminal node, player one would get 5 and player two would get 5 as well.

To solve the game, we work backward from the bottom. At the left-most branch of the tree, player two has the choice of A, which gives her a payoff of 5, and R, which gives a payoff of 0. So were player two to reach this branch, she would choose A (were she rational, which we can safely assume here and in chess as well). At the right-most branch of the tree, player two would have the choice of A, which gives a payoff of 2, and R, which gives a payoff of 0. Again, player two would choose A.

Now, we can effectively re-write the game tree, and replace both of player two's branches with what she would choose at the termination of each branch. Now, player one would have the choice of F, which would give a payoff of 5, and U, which would give a payoff of 8.

Given this, we can see how the game is going to go: player one will choose U and player two will choose A. Notice that player two ends up accepting the Ultimatum even though she would prefer to accept a Fair proposal; player one can get away with this because he knows what player two will choose at each branch. (A caveat: in experimental work on this, humans often have a tendency to irrationally reject an ultimatum even though they then end up with a lower payoff. It appears that we just don't like unfair deals, but for the purposes of chess this is less important.)

Solving chess would proceed in the same way (without moral implications of ultimatums, of course.) We would construct a full game tree, and then find the moves giving the best payoff to the player whose turn it is to move. At the bottom of the tree, this would consist of "White mates", "Black mates", or "Draw", for either White or Black to move. I'm sure that you can imagine working your way back through the tree by collapsing the choices at each branch; eventually, we get to the top node, where White has the initial move. Each of White's initial moves would then have a concrete payoff with it: Win, Lose, or Draw. A single move that has a conclusion of "Win" would mean that White is better from move one.
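The Ultimatum example is small enough to solve mechanically. Here is a rough sketch of the backward-induction procedure described above; the payoff table is the one from the example (Fair accepted gives (5,5), Ultimatum accepted gives (8,2), any rejection gives (0,0)), and the code structure is purely illustrative:

```python
# Game tree for 'Ultimatum': player one proposes, player two responds.
# Payoffs are (player one, player two), as in the example above.
TREE = {
    "F": {"A": (5, 5), "R": (0, 0)},   # Fair proposal
    "U": {"A": (8, 2), "R": (0, 0)},   # Ultimatum
}

def solve():
    """Backward induction: resolve player two's choices first, then player one's."""
    # Step 1: at each of player two's nodes, she picks the response
    # that maximizes HER payoff (index 1).
    reduced = {}
    for proposal, responses in TREE.items():
        best_response = max(responses, key=lambda r: responses[r][1])
        reduced[proposal] = (best_response, responses[best_response])
    # Step 2: player one, knowing this, picks the proposal that
    # maximizes HIS payoff (index 0).
    best_proposal = max(reduced, key=lambda p: reduced[p][1][0])
    response, payoff = reduced[best_proposal]
    return best_proposal, response, payoff

print(solve())  # ('U', 'A', (8, 2))
```

Solving chess this way would be the same loop run over an astronomically larger tree, with terminal payoffs of Win, Lose, or Draw instead of numbers.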

Steven Hamblin

Master of Arts student

Department of Psychology, University of Alberta

*Dang! Thank you for your e-mail!
*

*
And to show how shallow and unintelligent I am, this greatly reminded me of the scene from the movie A Beautiful Mind in which the guys are trying to pick up girls in a bar. They all want to go after the hottest babe until John Nash (played by Russell Crowe) tells them that it makes better mathematical sense for one of them to go after the hottest one while the rest go after her girlfriends -- that way each guy has a chance of, um, "earning a reward", as opposed to one guy getting the grand prize while the rest get bupkis. (In my opinion Nash should have got the Nobel Prize on the spot). See? Game theory does have everyday applications.
*

*
Steven, I'm sorry -- I don't mean to trivialize your e-mail. It's remarkably informative and I thank you for sending it. -- SL*

----------

I think the question is a hypothetical one. Just because computers can't solve it does not mean there is no solution to the question. My hunch is that with the current rules of chess you would find that with best play the game would be a draw, since there are many ways to have a large disadvantage but still be able to draw.

Since there are endgame tablebases that play perfect chess, I was wondering if it would be possible to have middlegame tablebases. Surely there are a limited number of combinations, and each combo has a certain pattern to the pieces (it doesn't matter in a kingside attack if there is an extra pawn or knight on the queenside somewhere), so the chess programmers could build up a huge database of all combination positions. Then, when the program comes to a position that closely resembles one in its database, it could first check whether the same combination works. This is how humans play, and if this could be incorporated into a program you would have a truly remarkable program.

In the early days of chess programming they had to calculate every position from scratch. I used to laugh to myself when you could set up a position and let the program solve it, then just make some minor change on the other side of the board and it would have to start from scratch again. But in the early days there was no option, as computers had very limited memory resources. Today, though, computers have astronomical gigabytes at their disposal, and endgame databases have been shown to improve play. So really my question is: are middlegame databases possible?

Chris Burns

*Thanks for the e-mail Chris!
*

*
Middlegame databases are theoretically possible, but the time required to generate them and the disk space required to store them make the idea presently unworkable. It's another exponential problem. Here's an example. The Fritz5.32 CD contained the three- and four-piece endings, which fit comfortably on the CD along with the program files. The original Fritz Turbo Endgame shipped on four CDs and contained the three- and four-piece endings as well as a subset of the complete set of five-piece tablebases (another vendor offered the complete five-piece set, which required ten CDs). Our current Fritz Turbo Endgame set contains all of the three-, four-, and five-piece endings as well as a subset of the possible six-piecers, and that requires five DVDs (not CDs) for storage.
*
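A back-of-the-envelope calculation suggests why each added piece blows up the storage so dramatically. This is a rough sketch that counts only square placements, ignoring piece identity, side to move, and legality, so the real tablebase figures differ, but the growth factor per extra piece is the point:

```python
from math import perm

# Upper bound on board placements for k pieces: 64 * 63 * ... * (64 - k + 1).
# Real tablebases are smaller (symmetries, illegal positions filtered out),
# but each added piece still multiplies the count by roughly 60.
for k in range(3, 8):
    growth = perm(64, k) // perm(64, k - 1)
    print(f"{k} pieces: about {perm(64, k):,} placements "
          f"(x{growth} over {k - 1} pieces)")
```

Multiplying storage by around 60 for each extra piece is exactly the jump from "fits on one CD" to "needs five DVDs" described above, and it only gets worse from there.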

*
As for the database idea in which a computer program looks for identical or similar positions, that was a feature of IBM's Deep Thought chess program but it didn't pay off especially well; as far as I know, that feature was not part of the later Deep Blue program. -- SL*

----------

As a mathematician, I was interested in your article on using computers to solve chess. However, it seems to me that starting from the initial position to analyze chess is not a very smart way of doing it if we want to solve chess completely. Instead, it makes sense to start from endgame positions and work backwards, checking which moves can lead to which positions. In fact, I believe that tablebases have been created up to six pieces (or perhaps more now). The point is that this essentially turns the multiplicative problem into an additive problem -- at each node we need only consider one option for each possibility, and then we can finish. So we start with all the positions with only two pieces left -- well, that's not very interesting, so we move onto the positions with three pieces left. We then document a bunch of possibilities:

1) if it's a win for the player to move, the best move (or a best move) is one that mates in the fewest moves

2) if it's a draw, the best move (or a best move) is one that allows the opponent the largest number of moves that lose

3) if it's a loss, the best move (or a best move) is one that allows the opponent the largest number of moves that do not win.

(Perhaps we can modify the second and third possibilities somewhat, but that's not so important right now.) Using this sort of procedure, we would need to consider only one line for each move since we will hopefully have evaluated that position already. There are, of course, some difficulties even here -- there are no global monovariants in chess since we may reach the same position multiple times. But it's a start.
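The work-backward idea described above can be seen in miniature on a toy game with no repetitions: say, a pile of stones from which each player removes one or two, with the player taking the last stone winning. This is an illustrative sketch only; real retrograde tablebase generation must also handle captures, promotions, and repeated positions, which this toy conveniently lacks:

```python
def solve_subtraction_game(max_stones):
    """Classify every position as a win or loss for the player to move,
    working backward from the terminal position (0 stones left means
    you lost, since your opponent took the last stone)."""
    result = {0: "loss"}                       # terminal node, evaluated first
    for n in range(1, max_stones + 1):
        moves = [n - take for take in (1, 2) if take <= n]
        # A position is a win if ANY move hands the opponent a loss.
        result[n] = "win" if any(result[m] == "loss" for m in moves) else "loss"
    return result

table = solve_subtraction_game(10)
print([n for n in range(11) if table[n] == "loss"])  # [0, 3, 6, 9]
```

Each position is evaluated exactly once by looking up already-solved successors, which is the additive (rather than multiplicative) character of the approach.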

Another question is whether it can be realized as a distributed project. I think it can, but not as well as some of the others. There are tons of positions that cannot lead to other positions. Suppose we give all the participants six-piece tablebases so that they can automatically evaluate six-piece positions. Can they solve the seven-piece tablebase problem? Sure -- we can't capture anything, we can only increase the number of pieces of either color by promoting pawns. In particular, there is no way of going from a position in which white has two pawns and black none to one where each side has one pawn, for example. So such positions can be given to different participants. Similarly, pawns cannot change files. They can only promote. It's too late at night for me to try to come up with a reasonable bound for the amount of time it would take to solve the seven-piece tablebase problem, but if we had 1000 computers working on it, I doubt it would take over a year. That seems like a really high estimate to me, in fact.

Simon Rubinstein-Salzedo

*I'll have to leave this one without comment. There are three rules I try to live by:
*

*
1) Never lie to your lawyer
2) Never shoot pool with a guy named "Fast Eddie"
3) Never argue numbers with a mathematician
*

*
Thank you for your e-mail sir! -- SL*

----------

I have read your column about using distributed computing to "solve" chess at Chessbase and could not agree more. Apart from the computing time issue, have you ever thought about the amount of memory that would be needed? As my brother once pointed out to me: even if you could somehow store one move in just one atom, would the number of atoms in the whole universe be enough to store the solution?

Alberto Adan

Spain

*Thanks Alberto! I, too, have heard that same storage argument. Someone once "refuted" it by saying that you don't have to store everything; you'd just delete the variations that don't work out. But that runs smack into the "time" issue again.
*

*
Space and time. I'm beginning to think it'll take an Einstein to sort this out. -- SL*

----------

In your interesting article, you seem to have missed a crucial factor while addressing the question of ECO solvability.

Now, Fritz 8 on your machine is capable of analysing 'only' 250,000 moves per second, whereas some really powerful machines, like Deep Blue or Hydra (200 million moves per second), can calculate much faster. And these are not even supercomputers.

Now, already-running distributed computing projects like SETI@Home or Folding@Home run on high-end clusters or supercomputer servers, which distribute the work.

The reason I am saying all this (apart from my being a computer engineer) is that nobody, including Bill Gates, was able to predict even 15 years ago how powerful computers would become today. Supercomputers have also grown by quantum leaps. If enough supercomputing power, combined with much-enhanced desktop computers, becomes available, then, who knows, ubiquitous distributed computing may solve ECO lines within your lifetime, rather than in the 600,000+ years you calculated in your article.

Shivakumar Sundaram

*All I can do is fall back on the "mea culpa" defense; my fault for mixing two related topics in one column. You're absolutely correct in your assessment regarding faster machines, but the original e-mail was from a fellow wanting to analyze all of ECO on his home machine using Fritz. But your point is correct and well taken, and I thank you for writing to me. -- SL*

----------

I wholeheartedly agree, especially about the pointlessness of the exercise. The only point I would make is that in order to "solve" chess properly I don't imagine you'd want to use a chess engine in the normal sense at all, because of the problems of selective search. What you'd need to be absolutely sure is a 32-piece tablebase. I've no idea how many positions per second a tablebase generator can crank through -- still not nearly enough for the project to be feasible on today's machines. But in 50 years, who knows? Or cares?

Billy

*Well, you cared enough to write -- heh. (And thanks for doing so). As discussed above, 32-piece tablebases are way outside the range of current capabilities, and unless somebody comes up with a really, really killer compression technique, I sincerely doubt they're feasible at all. -- SL*

----------

Re your article on the "solvability of chess": it seems to me that your argument is (unfortunately) flawed. I do find your comments about the time it would take to calculate just one ECO line rather hilarious, though. But the problem with such arguments (and similar ones exist in the philosophy of mind, where the heavy battery of physical limitation is usually called upon when that old brain-in-a-vat argument comes up) is that most of the time (and I fear this time as well) someone tries to refute an "in principle" thought experiment ("what if we had computers that...") with de-facto calculations. But that's simply a category error ("type mismatch") -- it's not the way it works! Here's why: fully stated, that "in principle" argument would go something like this. Let's assume that some amazing advances in physics make it possible to create a computer that calculates at a speed that is near or "at" (whatever that means) infinity. After all, that's at least conceivable. No matter how much money or energy it would take to build such a monster, we could imagine such a thing to exist. That's all that matters for the argument to work. (Let's say we combine Hydra with superconductivity or something like that.) All those billions of years then shrink together to something that's overseeable. And even if not -- in principle, you _could_ wait billions of years; if the result is that you know whether 1.e4 wins or not, maybe that's worth the waiting.

And here comes the second part of your argument -- namely, that engines evaluate different moves differently, and that therefore even an eventual statement like "1.e4 wins" by "SuperDuperFritz" does not necessarily mean that "SuperDuperShredder" would come to the same conclusion. But I don't see how this can be -- after all, for the sake of argument we are talking brute force here, and that means that SuperDuperFritz didn't tree-prune like hell but simply calculated all those billions and billions of positions and found one from which White can force a mate. And there is no way that SuperDuperShredder can come to a different conclusion. Evaluation is something that chess programs do only because they can't calculate to the bitter end. If they could, the respective program would be rather simple: just give them the chess rules, and let them chew the numbers. Example: one of the lines would be 1.e4 a5 2.d4 a4 3.Nf3 a3 4.Nxa3 Rxa3 5.bxa3, and I would assume that there's a forced win in this one with perfect play. Meaning that whatever Black plays on the next and any subsequent move, there is at least one move that keeps the advantage. And the difference between Fritz 9 and SuperDuperFritz is simply that Fritz 9 must assess that advantage and select moves that increase it, since it has limited time and calculation power, whereas time is nothing to SuperDuperFritz and every position gets tried anyway. It will find the positions that keep the advantage by simply trying them all -- and in SuperDuperFritz's vocabulary "keeping the advantage" simply means "eventually 1-0".

Does that make sense?

Albrecht von der Lieth

*Yes, pretty much. But I don't view "keeping an advantage" as being the same thing as a "forced win". In the example you give, for instance, I would argue that 1...a5 leading to a White win is hardly "forced play" since Black has many better alternatives for his first move (never mind the rest of the Black moves in the sequence you give).
*

*
The opening is really the nub of the problem. Going back to your example, a "forced win" would be defined as a White win after 1.e4 (and the assumption here for the sake of the argument is that 1.e4 is the best White opening move, jefk and Fischer notwithstanding) no matter what Black plays in reply. It doesn't matter whether Black plays a symmetrical opening with 1...e5, or the Sicilian, or the Alekhine, or the Caro-Kann, etc. -- there would have to be a forced win possible right from the get-go. Not just a move sequence that keeps a 0.03 pawn advantage, but one that leads inevitably to mate right from the first move. And I just plain don't think that such a thing exists.
*

*
And, I maintain that one engine's "advantage" might not be seen as such by a different engine. So it's not just a question of whether a forced win exists -- it also becomes a question of provability and verification. -- SL*

----------

*Then Albrecht cracked me up with a followup message:*

... to make a long story short and hopefully more convincing:

SuperDuperFritz, just as SuperDuperShredder would amount to nothing more or less than tablebases for 32 pieces.

*Oh, yeah -- that 32-piecer again...
*

*
Thanks for writing! -- SL*

----------

I read your article on Solving Chess. I have a degree in mathematics and am a longtime computer programmer, so I follow your arguments and, for the most part, agree with them. But I do have a few comments.

You basically cover two variations: solving chess with and without an evaluation function. Trying to solve it without such a function would be similar to solving tic-tac-toe. List all possible variations, without regard for whether they're "good" or not, and judge solely based on whether the end node is a win or not. Then work your way back up the tree to see if there is a forced set of moves to get to a winning node. Tic-tac-toe can be done this way, and I've even read that checkers is close to being solved. Your argument that chess has far too many variations to be solved this way is persuasive. But I might suggest waiting until computer science evolves a bit more before being so final. Quantum computing, which allows massively parallel processing, might make a difference. There also might be a way to "prune" the search tree in such a way that the search could be sped up without sacrificing any information. I believe that the checkers algorithm had some such technique.

The second variation involves using an evaluation function, and you state that there are many such functions, none perfect. While true, I don't quite see that finding such a "perfect" evaluation function, which does not need to look at ensuing positions, is impossible. There are some mathematical games, such as Nim, which can be evaluated and an "ideal" move chosen for any given position. I doubt that such a function exists for chess, but I don't think that it has ever been proven not to exist. (I have fantasies of finding such a function, then walking into the next Open and cleaning everyone's clock.)
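*And Nim's "perfect" function is worth seeing, because it really is that simple: take the exclusive-or of the pile sizes (the "nim-sum"); the player to move is lost exactly when it's zero, and otherwise any move that makes it zero wins. A quick sketch (mine, not Eric's) of what a closed-form evaluation looks like:*

```python
# Nim's closed-form evaluation: no lookahead, no search tree.
from functools import reduce
from operator import xor

def nim_sum(piles):
    return reduce(xor, piles, 0)

def best_move(piles):
    """Return (pile_index, new_size) forcing a win, or None if lost."""
    s = nim_sum(piles)
    if s == 0:
        return None  # every move loses against best play
    for i, p in enumerate(piles):
        if p ^ s < p:          # shrinking this pile zeroes the nim-sum
            return (i, p ^ s)

print(best_move([3, 4, 5]))  # -> (0, 1): take two from the first pile
```

*No search at all -- the function reads the position and answers. Nobody has ever found (or, I'd wager, ever will find) the chess equivalent. -- SL*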

So, while agreeing with you in the main, I still find a tiny possibility that chess may be "solved" someday. Whether it is or not doesn't really matter. Computers can now beat practically anyone, and people still play each other. The same would still be true if computers got to where they can beat anyone, anytime.

Eric M. Weeks

----------

I tend to agree with your conclusion and have another argument (not mine) which you may (or may not) find of interest:

In one of my EE classes at Duke Univ. (1986), the professor concluded the semester with a lecture in which he claimed (I didn’t verify, but took his word on it) that it had been shown both quantum mechanically and thermodynamically that there was a maximum rate at which matter (living or artificial) could process information per unit mass per unit time. He claimed that both approaches produced the same ceiling with units of bits/gram/second. (I do not recall the exact number).

He then cited a claim that the number of possible move sequences in chess is around 10 ^ 120. In technical terms this is a hell of a lot. This too I didn’t verify (I have no idea how you could actually figure this out).

Using the maximum rate of processing and assuming a computer with the mass [??? Grams] and age [4.5 * 10 ^ 9 = 4,500,000,000 years] of the earth and capable of this maximum processing rate, it would have processed around 10 ^ 30 bits (not even sequences in chess) in its lifetime. (Note that while I don’t recall the maximum processing rate, I somehow do recall the approximate value of rate * mass * time.) If I recall my math correctly this means that this imaginary computer would need to churn for another 10 ^ 90 times 4.5 billion years to have finished processing 10 ^ 120 bits (not sequences) [i.e. 10 ^ 120 = (10 ^ 30) * (10 ^ 90)].

I regret that I cannot cite the references which establish this supposed maximum processing rate, however, if it is indeed true (and not a bunch of academic B.S. [not uncommon]) and the claim about the number of sequences in chess is accurate, then it is simple to conclude that no amount of distributed computing on our planet could possibly solve the game of chess in the foreseeable future.

I hope that this has been of value and/or interest to you and that I haven’t just wasted your time,

Cameron Monemi

*No waste of time at all -- thanks for submitting it! That's interesting and it's sure to spark another lively mathematical debate. I'd love to play, too, but I fried my calculator about ten or so e-mails back and it's after midnight -- all the stores are closed, so I can't run out to buy a replacement.
*

*
I loved the line about "academic B.S." by the way. I'll refrain from further comment; I rather suspect that this column already has me soaking in enough hot water. -- SL*
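*One more thing: Python's arbitrary-precision integers can stand in for my fried calculator. Here's a quick check of Cameron's arithmetic -- the input figures are his recollections from that lecture, not verified constants:*

```python
# Sanity-checking the letter's arithmetic. The figures (10^30 bits
# processed over the earth's lifetime, 10^120 chess sequences) come from
# Cameron's recollection, not from verified physics.
bits_processed = 10 ** 30      # bits handled in 4.5 billion years
sequences = 10 ** 120          # claimed number of chess move sequences
age_of_earth = 4_500_000_000   # years

lifetimes_needed = sequences // bits_processed
print(lifetimes_needed == 10 ** 90)     # -> True: the 10^90 figure checks out
print(lifetimes_needed * age_of_earth)  # total years: 4.5 * 10^99
```

*So even granting the professor his processing ceiling, the numbers aren't remotely close. -- SL*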

----------

Well, that's it for the mail on this topic. I'd composed a half-decent ending for this column, but I forgot it because I was laughing so hard at Cameron's "academic B.S." comment. Oh well...such is life.

Until next week, have fun!

You can e-mail me with your comments on *ChessBase Workshop*. All responses will be read, and sending an e-mail to this address grants us permission to use it in a future column. No tech support questions, please.