A chess problem holds the key to human consciousness?

by Frederic Friedel
3/16/2017 – That, in fact, is what the newly founded Penrose Institute is suggesting. Its founder, the famous mathematician Sir Roger Penrose, has composed a problem devised "to defeat an artificially intelligent (AI) computer but be solvable for humans". Readers are asked to submit their solutions and share their reasoning. But neither the position itself nor the logic behind the experiment is compelling. Still, you may enjoy checking it with your chess engine.

The story was broken by Sarah Knapton, Science Editor of The Telegraph, who published it in her newspaper. In it she reported on the launch of the new Penrose Institute, founded by mathematics professor Sir Roger Penrose, who gained worldwide renown by working out black hole singularities together with Stephen Hawking (the two received the 1988 Wolf Prize in Physics for that work). I was unable to find the original chess article on the Penrose Institute site, but Sarah Knapton quotes extensively from it:

The chess problem – originally drawn by Sir Roger – has been devised to defeat an artificially intelligent (AI) computer but be solvable for humans. The Penrose Institute scientists are inviting readers to work out how White can win, or force a stalemate, and then share their reasoning. The team then hopes to scan the brains of people with the quickest times, or interesting Eureka moments, to see if the genesis of human ‘insight’ or ‘intuition’ can be spotted in the mind.

Can you solve the puzzle?

Scientists from the Penrose Institute want to hear from you if you've cracked it. They write:

The puzzle above may seem hopeless for White, with just a king and four pawns remaining, but it is possible to draw and even win. Scientists have constructed it in a way to confound a chess computer, which would normally consider that it is a win for Black. However an average chess-playing human should be able to see that a draw is possible.

A chess computer struggles because it looks like an impossible position, even though it is perfectly legal. The three bishops force the computer to perform a massive search of possible positions that will rapidly expand to something that exceeds all the computational power on planet Earth.

Humans attempting the problem are advised to find some peace and quiet and notice how the solution arises. Was there a flash of insight? Did you need to leave the puzzle for a while and come back to it? The main goal is to force a draw, although it is even possible to trick black into a blunder that might allow white to win.

The first person who can demonstrate the solution legally will receive a bonus prize. Both humans, computers and even quantum computers are invited to play the game and solutions should be emailed to puzzles@penroseinstitute.com.

Read the full Telegraph article here

The Telegraph report was picked up by a number of media outlets, like this one (in Mashable). There Lance Ulanoff writes:

It’s hard to imagine how the game got here—it's even harder to imagine what happens next, let alone a scenario in which four white pawns and a white king could play to a draw, or even win this game. Yet: scientists at the newly-formed Penrose Institute say it’s not only possible, but that human players see the solution almost instantly, while chess computers consistently fail to find the right move.

“We plugged it into Fritz, the standard practice computer for chess players, which did three-quarters of a billion calculations, 20 moves ahead," explained James Tagg, Co-Founder and Director of the Penrose Institute, which was founded this week to understand human consciousness through physics. "It says that one side or the other wins. But," Tagg continued, "the answer that it gives is wrong."

True. Above is the calculation displayed by the oldest engine I have installed on my notebook. Fritz 13 scores the position as 31.72 pawns ahead for Black. On ChessBase India, Sagar Shah checked it out with Houdini 5.01 Pro 64-bit, down to 34 ply in a four-line search. Result: 24.91 pawns ahead for Black.
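For perspective, a full-width search at such depths is astronomically large, which is why engines prune aggressively: Fritz's three-quarters of a billion positions for a 20-deep search is a minuscule fraction of the complete tree. Here is a back-of-the-envelope sketch; the branching factor of ten is an illustrative assumption, not a measured figure:

```python
# Size of a hypothetical full-width game tree, assuming an average
# of 10 legal moves per ply. Real engines visit vastly fewer nodes
# thanks to alpha-beta pruning and other search heuristics.
BRANCHING = 10  # assumed average number of legal moves per ply


def full_width_tree(depth_plies: int) -> int:
    """Leaf positions in an unpruned search to the given depth."""
    return BRANCHING ** depth_plies


for plies in (20, 34):
    print(f"depth {plies}: {full_width_tree(plies):.1e} positions")
```

Even with an unrealistically small branching factor, 34 ply gives on the order of 10^34 positions, which is why brute force alone cannot settle a blocked position like this one.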

It is true that chess engines will display high scores in favour of Black, due to the material advantage of a queen, two rooks, three bishops and a pawn. What they are saying is that Black has a huge material advantage, one that should result in a win (–+). And they will keep moving their bishops, displaying a high positive evaluation right until the 50-move rule approaches and they see there is no possibility of forcing a pawn move by White. Maybe some of our readers can play out the position and tell us when top engines see the futility of continuing to move and display an eval = 0.00.
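What the engines eventually stumble into is simple bookkeeping: every move that is neither a pawn move nor a capture increments the halfmove clock, and once it reaches 100 half-moves (50 full moves by each side) the draw can be claimed. A toy sketch of that rule, under the simplifying assumption that White only shuffles his king and Black only shuffles bishops (the helper name is mine, for illustration):

```python
# Toy model of the fifty-move rule in the Penrose position:
# neither side ever moves a pawn or captures, so the halfmove
# clock climbs monotonically until the draw can be claimed.
DRAW_THRESHOLD = 100  # 50 full moves = 100 half-moves


def plies_until_draw_claim(moves):
    """Return the ply count at which the fifty-move draw becomes
    claimable, or None if the sequence ends first. Each element of
    `moves` is True for a pawn move or capture (which resets the
    clock), False otherwise."""
    clock = 0
    for resets_clock in moves:
        clock = 0 if resets_clock else clock + 1
        if clock >= DRAW_THRESHOLD:
            return clock
    return None


# White shuffles his king, Black shuffles bishops: no resets.
print(plies_until_draw_claim([False] * 120))  # -> 100
```

Since Black cannot force a pawn move or a capture, the clock can never be reset, and the draw claim arrives on schedule.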

Interestingly, when I remove two black bishops my ancient Fritz 13 sees the draw in mere seconds. If I remove just one bishop it does not come up with a 0.00 evaluation in a reasonable amount of time.

But now we come to the humans, who can indeed work things out in a flash: the position is extremely contrived, and so the first thing you do is work out that Black has no legal moves except with his bishops. All White needs to do to defend the position is not to capture a black rook and not move the c6-pawn. He simply moves his king around, mainly on the white squares, and lets Black make pointless bishop moves. Absolutely nothing can go wrong. Once again we ask the owners of very old chess engines to check whether any of them will capture a rook, in order to reduce the material disadvantage slightly – but in the process lose the game.

On the other hand the contention that "it is even possible to trick Black into a blunder that might allow White to win" seems extremely far-fetched. Black would need to move his bishops out of the way, while White advances his king to protect the c-pawn, which then promotes (e.g. 1.Kf3 Be1 2.Ke4 Bc1 3.Kd5 Ba1 4.Ke6 Bec3 5.c7 Kb7 6.Kd7 Bf4 7.c8=Q#), but that is not White tricking Black, it is some kind of pointless helpmate.

Anyway, it is trivially easy for White to hold the draw, and the Penrose Institute will probably receive hundreds of correct solutions submitted by average chess players. The scientists say they are interested in the thought process that led people to the solution – a sudden moment of genius, or the result of days of consternation? "If we find out how humans differ from computers, then it could have profound sociological implications," Penrose told The Telegraph. Really?

There are much more elegant positions and more profound examples that show the difference between human and computer thinking. Back in March 1992 I published the following study in a computer magazine, as a challenge for any machine to get it right:

[Event "La Strategie / CSS 3/92-29"]
[Site "?"]
[Date "1912.??.??"]
[Round "?"]
[White "Rudolph, W."]
[Black "White to play and draw"]
[Result "1/2-1/2"]
[SetUp "1"]
[FEN "3B4/1r2p3/r2p1p2/bkp1P1p1/1p1P1PPp/p1P1K2P/PPB5/8 w - - 0 1"]
[PlyCount "11"]
[EventDate "1912.??.??"]

1. Ba4+ $1 Kxa4 (1... Kc4 2. Bb3+ Kb5 3. Ba4+ Kc4 $11) 2. b3+ Kb5 3. c4+ Kc6 4. d5+ Kd7 5. e6+ Kxd8 6. f5 1/2-1/2

You probably know that you can switch on an engine on our JavaScript board (and move pieces to analyse). You can maximize the replayer, auto-play, flip the board and even change the piece style in the bar below the board. At the bottom of the notation window on the right there are buttons for editing (delete, promote, cut lines, unannotate, undo, redo), saving, and playing out the position against Fritz. Hovering the mouse over any button will show you its function.

Fritz & co. display an eight-pawn disadvantage for White. The correct first move is to sacrifice even more material, which is the only way to secure a draw. This is a much more relevant test, as chess engines, playing the white side, will actually select the wrong strategy and lose the game. In the Penrose position computers will "think that White is losing", but they will hold the draw without any problem (I say this without having tested older engines to see whether they can be enticed into capturing a rook and losing the game).
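Incidentally, the "eight-pawn disadvantage" is close to a raw material count. A naive tally over the Rudolph study's FEN (given in the game header above) comes to seven pawn units; an engine's displayed figure also folds in positional terms. This is an illustrative snippet I wrote for the occasion, not engine code:

```python
# Naive material balance from the piece-placement field of a FEN.
# Uppercase letters are White, lowercase are Black; kings count 0.
VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}


def material_balance(fen: str) -> int:
    """Black's material minus White's, in pawn units."""
    board = fen.split()[0]  # keep only the piece placement
    balance = 0
    for ch in board:
        if ch.isalpha():  # skip digits and rank separators
            value = VALUES[ch.lower()]
            balance += value if ch.islower() else -value
    return balance


rudolph = "3B4/1r2p3/r2p1p2/bkp1P1p1/1p1P1PPp/p1P1K2P/PPB5/8 w - - 0 1"
print(material_balance(rudolph))  # -> 7 (White roughly seven pawns down)
```

A count like this is exactly the kind of static term an evaluation function starts from, before search and positional heuristics adjust it.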

This little recreational pastime of taking the mickey out of chess-playing computers has a long history, which will be told at a later stage. I must admit: it is getting harder and harder as these things get stronger and stronger.


Topics: chess problem

Editor-in-Chief of the ChessBase News Page. Studied Philosophy and Linguistics at the University of Hamburg and Oxford, graduating with a thesis on speech act theory and moral language. He started a university career but switched to science journalism, producing documentaries for German TV. In 1986 he co-founded ChessBase.



Discuss


Tom Box Tom Box 3/16/2017 09:08
For the Rudolph problem, Stockfish does not follow the line given above but recommends 1...b3. One of the simplest positions that shows a chess engine does not 'understand' but only calculates is a board divided by a wall of pawns of both colours, with a king on one side and king and queen on the other. A child immediately sees that the separation is total and the 'superior' side's extra queen is meaningless, while the engine sees the position as winning for the side with the queen.
deepestgreen deepestgreen 3/16/2017 10:08
presumably those that created this problem aren't really chess players. It's so obvious, there isn't any thought process required.
benedictralph benedictralph 3/16/2017 10:16
A computer could easily be programmed to solve *this type* of problem, if need be. Just like it can be programmed to do any number of other specific things. The problem "proves" nothing except that most chess programs are not coded in the right way to solve it. Indeed they are coded to play strong chess and this usually comes at the expense of dealing with "exotic" positions that are virtually impossible to occur in a real game.
sagitta sagitta 3/16/2017 12:21
This is shockingly stupid for a maths professor - I would guess that anyone rated above 1000 would see that it is a draw in a second or two
PEB216 PEB216 3/16/2017 02:41
A fascinating problem if we accept the challenge posed in the third paragraph of this article: ". . . workout how White can win, or force a stalemate." I couldn't see a win for White (not strictly true), but I did see the possibility for a "stalemate," although, unfortunately, it doesn't work. Here is my idea: Place the White King on c8 and the White Pawn on c7. With the threat Kb8 (with the idea Ka8 followed by c8(Q) and mate). If, after Kb8, Black responds with Bxc7+, then Ka8. Now White threatens a "stalemate" (if Black cooperates!) as follows: b3xa4, Qxa4; c4xb5+, Qxb5?? (of course, this is a blunder; the right move is Kxb5) stalemate. If Black plays Bxc7 (when the White King is still on c8), then play continues as previously: b3xa4, Qxa4; c4xb5+ Qxb5?? (Kxb5 is the correct move) stalemate.
Frederic Frederic 3/16/2017 02:44
@benedictralph: you nailed a point that I could have made: chess playing computers are not programmed to deal with such problems since the corresponding positions NEVER occur -- in normal games, as opposed to very abstruse artificially constructed positions, that are presented for pure entertainment. To get a program to consider completely locked positions would indeed be quite easy, but in the end it would probably not add a point of playing strength in regular games. Well, maybe a few, but they would be cancelled by the time spent checking every position for blockages.
pmousavi pmousavi 3/16/2017 05:02
"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a
machine cannot do, then I can always make a machine which will do just that!" Jon Von Neumann
cythlord cythlord 3/16/2017 05:05
The proof of a legal position is trivial: black needs six captures and white has no pieces left. By all means it is a stupid problem. Here's a sample game: [Event "?"]
[Site "?"]
[Date "????.??.??"]
[Round "?"]
[White "?"]
[Black "?"]
[Result "*"]
[PlyCount "118"]

1. Nf3 b6 2. b3 c5 3. Ba3 d6 4. Bb4 cxb4 5. Na3 bxa3 6. c4 g5 7. Qc2 g4 8. Qf5
g3 9. Qc5 dxc5 10. Nd4 Nc6 11. Nc2 Ne5 12. Nb4 cxb4 13. d4 Nf3+ 14. gxf3 g2 15.
Kd2 g1=B 16. h4 Bb7 17. Rh3 Nf6 18. Rg3 Ng8 19. Rg6 Nh6 20. Rd6 exd6 21. Rc1
Ng8 22. Rc3 Qe7 23. Re3 Kd8 24. Re5 Bc6 25. Rd5 Kc8 26. Re5 Kb7 27. Rd5 Ka6 28.
Re5 h5 29. Re4 Rh6 30. Re3 Rg6 31. Kd1 Rg5 32. Kd2 Ra5 33. Kd1 Ra4 34. Kd2 Rc8
35. Kd1 Bb7 36. Kd2 Qe5 37. Kd1 Qa5 38. Kd2 Rc5 39. Kd1 Rb5 40. Re5 f5 41. Rc5
dxc5 42. d5 Bc6 43. dxc6 Nf6 44. e4 Nxe4 45. fxe4 fxe4 46. f3 exf3 47. Be2
fxe2+ 48. Kc2 e1=B 49. Kd3 Bxh4 50. Ke4 Bg3 51. Kf5 Bh4 52. Kg6 Bg3 53. Kxh5
Bh4 54. Kg4 Bg3 55. Kf3 Bf4 56. Ke2 Bgh2 57. Kd1 B8d6 58. Ke1 Bhg3+ 59. Ke2
Bde5 *

I struggle with even the most basic of retro problems, but this one was so easy I'm not sure it even qualifies as a retro problem.
saivishwesh saivishwesh 3/16/2017 05:17
i don't think white can ever force a stalemate let alone a checkmate...how will white force a stalemate if black places one of his bishops on c7 and keeps moving the g3 bishop between h2 and g3...??
delax001 delax001 3/16/2017 06:05
Computers do not program themselves, not so far. The above examples only prove human laziness on the part of the engine programmers in not including such criteria in the evaluation function. These are extreme corner cases, with almost no practical use. In no way do these cases put into question that the search-and-evaluate process that computers use, at ever-growing performance rates, has widely surpassed the threshold where they are able to defeat the human chess-playing approach, based mostly on pattern recognition, reasoning and limited "calculation".
vgn2 vgn2 3/16/2017 06:40
Very surprised at the depth of the solution from such a great mathematician/physicist/writer!
Should have got it reviewed by some strong players before publishing...
jajalamapratapri jajalamapratapri 3/16/2017 06:44
Stockfish "solves" this easily in that it plays king moves until the 50 moves rule gets the draw. I'm sure any other engine will do the same.
jajalamapratapri jajalamapratapri 3/16/2017 06:46
PS I'm sure I've seen similar fun positions when I was a kid half a century ago, or made them myself (any idiot can do that); it is not "invented" by Penrose.
vladmirsicca vladmirsicca 3/16/2017 06:57
How does an engine behave if the white king starts at b1 instead? Does it suicide in order to decrease white's material disadvantage drastically?
amandas amandas 3/16/2017 09:10
Black can only move the bishops, so the king will do 50 moves and the position isn't going to change. It will be a draw because of the 50-move rule.
ewenardus ewenardus 3/16/2017 09:44
Isn't there a mistake in the instructions? I thought stalemate is when your king can't move without entering check - this is a draw because of the 50-move rule and not stalemate. It should be White to win or draw, not win or get stalemated.
fgkdjlkag fgkdjlkag 3/16/2017 11:04
I think the point is that while the computer could be programmed to be able to solve this kind of the position, what they are testing is that even though the computer can look ahead hundreds of millions of moves, it does not realize the position is drawn. It is more the conclusion with this horizon and not the specific ability of the computer they are looking at.
moonsorrow55 moonsorrow55 3/16/2017 11:42
Super-easy for white to draw here, literally every move that isn't pawn to c6 draws.
moonsorrow55 moonsorrow55 3/16/2017 11:45
And if the solution somehow involves white WINNING instead of just a draw, black just puts the bishop on c7 and stops any tricks by white involving pushing the c pawn. Then white is left with nothing except king moves which are all drawn, or captures with his remaining pawns which appear to clearly lose in all lines.
Chvsanchez Chvsanchez 3/17/2017 12:34
The diagram, in fact, shows the end of the solution. The real problem has the white pawn on d5 and a black knight on c6, the black bishop on h2 and a white queen on g3. The solution is 1.dxc6! sacrificing the queen, after 1...Bxg3 a computer will believe it's winning but in fact it's a draw.
benedictralph benedictralph 3/17/2017 03:17
@fgkdjlkag

Of course the computer does not "realize" the position is drawn because it is not programmed to "realize" such an unlikely event. Why is there always this assumption that AI programs are or should be designed as some kind of "general purpose" machine? They are NOT. General purpose machines tend NOT to be very good at anything. Just like a jack of all trades. Analogously, if computers were conscious, they might laugh at human grandmasters simply being unable to see an "obvious" forced mate 15 moves ahead which they could see in 15 seconds. Would this "prove" that humans are stupid?
Toastmastergeneral Toastmastergeneral 3/17/2017 04:47
Last I checked, engines can't find Nigel Short's "king walk" against Jan Timman from 1991. My engines don't see Nigel's brilliant 31.Kh2!! which kicks off the king walk. Once you plug the move in, the engines see the win instantly.
benedictralph benedictralph 3/17/2017 05:06
@Toastmastergeneral

That's probably because the engine "pruned" that part of the move tree based on the heuristics that were programmed into it. The move in question probably seemed not worth considering based on said coded heuristics (until it was played). On the flip side, how many strong lines have human grandmasters missed that they only found out through the use of engines? A whole lot more, I suspect.
WildKid WildKid 3/17/2017 08:36
Several commenters have said that there is no real-world advantage to programming computers to recognize these types of positions. I disagree. I think it's important to recognize in-principle limiting characteristics of positions: for example, 'If Black manages to advance that pawn, he will have a fortress that White can never find a way through.' This is neither a positional judgment, nor a tactical one: it's an in-principle long-term property of the position, of the same type as the one the computers arguably miss above. Being able to recognize such positions and either go for, or avoid, them, would definitely make the computer a stronger player.
sjb sjb 3/17/2017 08:56
Interesting that the idea is found with some older programmes when a bishop or two is removed. How about with the converse - can we add more bishops (may take some moves away but add others) - via more promoted pawns etc
benedictralph benedictralph 3/17/2017 09:21
@WildKid

It's not so easy. Every new principle or heuristic that is added to a chess program will have to apply to literally millions of positions analyzed every second. The computer cannot afford to be "selective" about when to apply them or "trust its gut". So a point of diminishing returns is quickly reached. The position in this case is also EXTREMELY unlikely to occur in a real game. Possibly it will NEVER occur in a million years. Unlike human grandmasters, when a computer makes a single mistake it is thoroughly condemned as being totally useless or flawed. When a grandmaster misses something though, it is merely an "oversight" due to fatigue or stress or something like that. So why risk it?
JiraiyaSama JiraiyaSama 3/17/2017 09:49
To the people who are thinking Roger Penrose has no idea about chess: He has a brother by the name Jonathan Penrose, who is a Grandmaster. I think he proposed this position intentionally.
WildKid WildKid 3/17/2017 11:00
@benedictralph

The Penrose position is so artificial that it is unlikely to occur: however, the second position is of a general type that occurs quite often in 'fortress' endgames where neither party can cross a pawn barrier. The deciding game in the Women's World Championship won by Tan Zhongyi had a little of this 'pawn fortress' flavor. There would definitely be a 'real-world' benefit to recognizing the type.
benedictralph benedictralph 3/17/2017 11:14
@WildKid

I have yet to see a real-world game position that a modern chess engine couldn't handle. Where, for example, it totally blundered and lost (like human grandmasters often do, to say nothing of the vast majority of average human players). Even in such cases, I suspect the problem could be remedied by fine-tuning some of its heuristics. You know, like in a future build of the same program released the next year? This is why composing some exotic position proves nothing against chess software, much less AI or artificial consciousness in general. Now, if a computer could "come up" with original ideas or things on its own... that would be a step in the right direction and worth investing in. But in chess, what's really left to discover that's worth investing millions and years of work in?
vishyvishy vishyvishy 3/17/2017 11:22
White moves king on all available white squares with almost speed of light... and black plays like drunk , slow like snail ... Black doesn't notice his clock is ticking... keeps playing slowly... due to this BLUNDER then before 50 moves blacks time runs out ...Black flags and white wins!... otherwise anyway if black is awake then white gets draw due to 50 move rule! :) So it is a win or draw situation for white ...proved legally!
delax001 delax001 3/17/2017 05:04
I think we are diverging in this thread. Discussion is moving in the direction on how useful/practical would be to include the evaluation criteria that would allow to recognize these position into a regular/commercial chess engines. And this can be argued at length. There are ROI arguments, as well as debate where we want to spend the millions of positions per second calculating capabilities.
But the initial statement still remains incorrect, IMHO: "a problem devised to defeat an artificially intelligent (AI) computer but be solvable for humans". There is no doubt that chess engines are capable of finding the correct solution to these problems, if programmed accordingly.
Amtiskaw_ Amtiskaw_ 3/17/2017 05:49
Those complaining that these sorts of positions are unrealistic are entirely missing the point that Penrose and co. are making. It is the artificial, impossible nature of these positions that is the most important thing.

Their point is that neither a human nor a chess playing computer will ever have encountered such a position in a real game, yet a human with average chess ability can still work out the solution without difficulty. This suggests that there is some kind of imaginative analysis from first principles that a human is capable of performing, but machines, at present, cannot. Yes, you could adjust the programming of software to recognise particular strange situations, but this isn't the same as the software inherently having the imagination to work outside of its defined utility function.

This might all seem a little quixotic, but in fact it goes to the heart of current AI research, which is heavily based on the idea of "reinforcement learning". This involves training an AI by having it play a few million games and refining a neural network (e.g. how AlphaGo works). Yet these problems suggest a human chess player has a form of intelligence that is *not* based purely on reinforcement learning, because they will not have encountered the problem before, yet can still solve it.
domnul_goe domnul_goe 3/17/2017 08:42
1. e4 c5 2. Nf3 Nc6 3. d4 d6 4. b3 Nf6 5. Bd2 h5 6. Bb4 cxb4 7. Na3 bxa3 8. c4 Qa5+ 9. Ke2 h4 10. d5 h3 11. dxc6 Rh5 12. Ne1 Rb5 13. Nd3 Rb4 14. Qd2 Ra4 15. Nc5 dxc5 16. Qb4 cxb4 17. Rd1 hxg2 18. h4 Bh3 19. Rxh3 Rd8 20. h5 Rd5 21. h6 Kd8 22. Rh5 Kc7 23. f4 Rb5 24. Rd6 exd6 25. Rc5 dxc5 26. e5 g1=B 27. Bh3 Kb6 28. hxg7 Bxg7 29. Kd3 Ng4 30. Ke4 f5+ 31. Kf3 Bh2 32. Bxg4 fxg4+ 33. Kf2 g3+ 34. Ke2 g2 35. Kd3 g1=B 36. Kc2 Be3 37. Kd3 Bexf4 38. Ke2 Bgxe5 39. Kd3 Ka6 40. Ke4 b6 41. Kd3 Bhg3 (draw, or…) 42. Ke4 Bf6 43. Kf5 B4g5 44. Ke6 B3h4 45. Kd7 Be7 46. Kc8 Bef6 47. Kb8 Be7 48. c7 Bef6 49. c8=B#
Peterdes Peterdes 3/17/2017 10:21
Well, the deep problem of the article is the misinterpretation of the puzzle. The puzzle says that white has to find a way to "force black into stalemate or win". 50-move rule is _not_ a stalemate (even if it is a draw!), so the above "solution" is simply invalid.
benedictralph benedictralph 3/18/2017 01:23
@Amtiskaw_

"Their point is that neither a human nor a chess playing computer will ever have encountered such a position in a real game, yet a human with average chess ability can still work out the solution without difficulty."

The truth is, the vast majority of humans with or without chess playing ability (though they all have "intuition") would not be able to find the actual solution either. Secondly, why is there this assumption that the minority of good players who could indeed find the actual solution did so "easily"? Last I checked the brain is an extremely complicated black box which we have virtually no idea how it really works and arrives at decisions. What we do know, however, is that it is capable of making an inordinate number of mistakes (and often does). This is simply unacceptable with computers so the two should not be compared or the human brain put on some kind of pedestal. At the end of the day, computers still can beat pretty much 100% of human chess players under any time control "without difficulty", including the world's best. What does that say about "intelligence" now that we've conveniently moved the goalpost to "intuition" when it comes to AI?
cythlord cythlord 3/18/2017 03:23
@Toastmaster: Using the "chessbase cloud" analysis, we can see that an engine has in fact found the Kh2!! move and actually calculated it to mate, so just because engines don't see it immediately doesn't mean they will never see it. Nowadays there are more and more tools for engines to solve these puzzles (although solving puzzles was never the intent in the first place), such as Houdini's Tactical Analysis, the Cloud, and other tools.
benedictralph benedictralph 3/18/2017 05:12
@cythlord

"...more and more tools for engines to solve these puzzles (although solving puzzles was never the intent in the first place)"

Exactly. Just like a human grandmaster may find difficulty (or lack of interest) with a class of exotic chess puzzles that can only be fully appreciated by a very small minority of master composers. The human grandmaster was never trained for such things and should not be judged based on his ability to solve them. Yet another double standard between humans and computers.
fgkdjlkag fgkdjlkag 3/18/2017 09:47
@Chvsanchez claims that the problem given is wrong! Why no comments on that?

I tried the problem he mentioned using stockfish and after 5 minutes it thinks that 1. Qg2?? is the best move. A human would quickly spot the solution. Heuristics for that kind of position would not be trivial to implement in a computer.
polkeer polkeer 3/18/2017 04:04
@benedictralph

"[C]omposing some exotic position proves nothing against chess software, much less AI or artificial consciousness in general."

The Penrose Institute isn't trying to prove anything against chess software, but is trying to use this example to study and understand consciousness through physics. The problem with such reductionism is that, according to it, you would have to believe that by analysing your computer and looking inside its screen, you could understand how the YouTube video you are watching was made.
benedictralph benedictralph 3/19/2017 12:22
@polkeer

What are you talking about? It says right at the top of this article the following claim:

"to defeat an artificially intelligent (AI) computer but be solvable for humans"

Really? ANY artificially intelligent computer? How about one programmed to solve this type of position?