A chess problem holds the key to human consciousness?

by Frederic Friedel
3/16/2017 – That, in fact, is what the newly founded Penrose Institute is suggesting. The founder, the famous mathematician Sir Roger Penrose, has composed a problem devised "to defeat an artificially intelligent (AI) computer but be solvable for humans". The latter are asked to submit their solutions and share their reasoning. But neither the position itself nor the logic behind the experiment is compelling. Still, you may enjoy checking it with your chess engine.


The story was broken by Sarah Knapton, Science Editor of The Telegraph. She reported on the launch of the new Penrose Institute, founded by mathematics professor Sir Roger Penrose, who shared the 1988 Wolf Prize in Physics with Stephen Hawking for their work on black hole singularities. I was unable to find the original chess article on the Penrose Institute site, but Sarah Knapton quotes extensively from it:

The chess problem – originally drawn by Sir Roger – has been devised to defeat an artificially intelligent (AI) computer but be solvable for humans. The Penrose Institute scientists are inviting readers to work out how White can win, or force a stalemate, and then share their reasoning. The team then hopes to scan the brains of people with the quickest times, or interesting Eureka moments, to see if the genesis of human ‘insight’ or ‘intuition’ can be spotted in the mind.

Can you solve the puzzle?

Scientists from the Penrose Institute want to hear from you if you've cracked it. They write:

The puzzle above may seem hopeless for White, with just a king and four pawns remaining, but it is possible to draw and even win. Scientists have constructed it in a way to confound a chess computer, which would normally consider that it is a win for Black. However, an average chess-playing human should be able to see that a draw is possible.

A chess computer struggles because it looks like an impossible position, even though it is perfectly legal. The three bishops force the computer to perform a massive search of possible positions that will rapidly expand to something that exceeds all the computational power on planet Earth.

Humans attempting the problem are advised to find some peace and quiet and notice how the solution arises. Was there a flash of insight? Did you need to leave the puzzle for a while and come back to it? The main goal is to force a draw, although it is even possible to trick Black into a blunder that might allow White to win.

The first person who can demonstrate the solution legally will receive a bonus prize. Humans, computers and even quantum computers are all invited to play the game, and solutions should be emailed to puzzles@penroseinstitute.com.

Read the full Telegraph article here

The Telegraph report was picked up by a number of media outlets, like this one (in Mashable). There Lance Ulanoff writes:

It’s hard to imagine how the game got here—it's even harder to imagine what happens next, let alone a scenario in which four white pawns and a white king could play to a draw, or even win this game. Yet: scientists at the newly-formed Penrose Institute say it’s not only possible, but that human players see the solution almost instantly, while chess computers consistently fail to find the right move.

“We plugged it into Fritz, the standard practice computer for chess players, which did three-quarters of a billion calculations, 20 moves ahead," explained James Tagg, co-founder and director of the Penrose Institute, which was founded this week to understand human consciousness through physics. "It says that one side or the other wins. But," Tagg continued, "the answer that it gives is wrong."

True. Above is the calculation displayed by the oldest engine I have installed on my notebook. Fritz 13 scores the position as 31.72 pawns ahead for Black. On ChessBase India Sagar Shah checked it out with Houdini 5.01 Pro 64 bit, down to 34 ply in a four-line search. Result: 24.91 pawns ahead for Black.

It is true that chess engines will display high scores in favour of Black, due to the material advantage of a queen, two rooks, three bishops and a pawn. What they are saying is that Black has a huge material advantage, one that should result in a win (–+). And they will keep moving their bishops, displaying a high positive evaluation right until the 50-move rule approaches and they see there is no possibility of forcing a pawn move by White. Maybe some of our readers can play out the position and tell us when top engines see the futility of continuing to move and display an eval = 0.00.
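The 50-move mechanics can be sketched with the python-chess library, if you have it installed. Note two assumptions: the FEN below is the widely reported version of the Penrose position (the diagram above is an image, so this reconstruction is not taken from the article itself), and the shuffle sequence is just one arbitrary pair of reversible moves for each side.

```python
import chess

# Widely reported FEN for the Penrose diagram (an assumption, since the
# article shows the position only as an image):
board = chess.Board("8/p7/kpP5/qrp1b3/rpP2b2/pP4b1/P3K3/8 w - - 0 1")

# White shuffles the king between e2 and d1 while Black shuffles the
# g3-bishop to h2 and back: no captures and no pawn moves, so the
# halfmove clock just climbs.
for _ in range(25):  # 25 cycles of 4 plies = 100 reversible half-moves
    for uci in ["e2d1", "g3h2", "d1e2", "h2g3"]:
        board.push_uci(uci)

print(board.halfmove_clock)           # 100
print(board.can_claim_fifty_moves())  # True: either side may claim the draw
```

After 100 such half-moves the draw can be claimed, no matter how large an advantage the engine's evaluation still displays.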

Interestingly, when I remove two black bishops my ancient Fritz 13 sees the draw in mere seconds. If I remove just one bishop it does not come up with a 0.00 evaluation in a reasonable amount of time.

But now we come to the humans, who can indeed work things out in a flash: the position is extremely contrived, and so the first thing you do is work out that Black has no legal moves except with his bishops. All White needs to do to defend the position is not capture a black rook and not move the c6-pawn. He simply moves his king around, mainly on the white squares, and lets Black make pointless bishop moves. Absolutely nothing can go wrong. Once again we ask the owners of very old chess engines to check whether any of them will capture a rook, in order to reduce the material disadvantage slightly – but in the process lose the game.
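Both observations are easy to check mechanically. A small sketch with the python-chess library, again assuming the widely reported FEN for the diagram: every legal Black move is a bishop move, and White's only non-king moves are exactly the three pawn moves that could throw the draw away.

```python
import chess

# Widely reported FEN for the Penrose diagram (an assumption, since the
# article shows it only as an image):
FEN = "8/p7/kpP5/qrp1b3/rpP2b2/pP4b1/P3K3/8"

# With Black to move, every legal move is made by a dark-squared bishop:
black = chess.Board(FEN + " b - - 0 1")
movers = {black.piece_at(m.from_square).piece_type for m in black.legal_moves}
print(movers == {chess.BISHOP})  # True: queen, rooks, pawns and king are stuck

# With White to move, the only non-king moves are the two rook captures
# and the c-pawn push -- precisely the moves White must avoid:
white = chess.Board(FEN + " w - - 0 1")
pawn_moves = [white.san(m) for m in white.legal_moves
              if white.piece_at(m.from_square).piece_type == chess.PAWN]
print(sorted(pawn_moves))  # ['bxa4', 'c7', 'cxb5+']
```

So "absolutely nothing can go wrong" amounts to: make any king move, never any of those three pawn moves.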

On the other hand, the contention that "it is even possible to trick Black into a blunder that might allow White to win" seems extremely far-fetched. Black would need to move his bishops out of the way while White advances his king to protect the c-pawn, which then promotes (e.g. 1.Kf3 Be1 2.Ke4 Bc1 3.Kd5 Ba1 4.Ke6 Bec3 5.c7 Kb7 6.Kd7 Bf4 7.c8=Q#), but that is not White tricking Black, it is some kind of pointless helpmate.

Anyway, it is trivially easy for White to hold the draw, and the Penrose Institute will probably receive hundreds of correct solutions submitted by average chess players. The scientists say they are interested in the thought process that led people to the solution – a sudden moment of genius, or the result of days of consternation? "If we find out how humans differ from computers, then it could have profound sociological implications," Penrose told The Telegraph. Really?

There are much more elegant positions and more profound examples that show the difference between human and computer thinking. Back in March 1992 I published the following study in a computer magazine, as a challenge for any machine to get it right:

[Event "La Strategie / CSS 3/92-29"]
[Date "1912.??.??"]
[White "Rudolph, W."]
[Black "White to play and draw"]
[Result "1/2-1/2"]
[SetUp "1"]
[FEN "3B4/1r2p3/r2p1p2/bkp1P1p1/1p1P1PPp/p1P1K2P/PPB5/8 w - - 0 1"]

1. Ba4+! Kxa4 (1... Kc4 2. Bb3+ Kb5 3. Ba4+ Kc4 =) 2. b3+ Kb5 3. c4+ Kc6 4. d5+ Kd7 5. e6+ Kxd8 6. f5 1/2-1/2
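The study's solution can likewise be replayed mechanically. A sketch with the python-chess library, using the FEN from the game header above:

```python
import chess

# Play through Rudolph's 1912 solution from the FEN in the game header:
board = chess.Board(
    "3B4/1r2p3/r2p1p2/bkp1P1p1/1p1P1PPp/p1P1K2P/PPB5/8 w - - 0 1")
for san in ["Ba4+", "Kxa4", "b3+", "Kb5", "c4+", "Kc6",
            "d5+", "Kd7", "e6+", "Kxd8", "f5"]:
    board.push_san(san)  # raises if any move in the line is illegal

# White has given away both bishops, but his eight pawns now all stand on
# light squares and seal every entry route; Black's surviving bishop is
# dark-squared and can never attack the chain.
print(len(board.pieces(chess.PAWN, chess.WHITE)))    # 8
print(len(board.pieces(chess.BISHOP, chess.WHITE)))  # 0
```

The fortress is the point: after 1.Ba4+! White deliberately sheds material, the exact opposite of what a material-counting search wants to do.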

You probably know that you can switch on an engine on our JavaScript board (and move pieces to analyse). You can maximize the replayer, auto-play, flip the board and even change the piece style in the bar below the board. At the bottom of the notation window on the right there are buttons for editing (delete, promote, cut lines, unannotate, undo, redo), saving, and playing out the position against Fritz. Hovering the mouse over any button will show you its function.

Fritz & co. display an eight-pawn disadvantage for White. The correct first move is to sacrifice even more material, which is the only way to secure a draw. This is a much more relevant test, as chess engines playing the white side will actually select the wrong strategy and lose the game. In the Penrose position computers will "think that White is losing", but they hold the draw without any problem (I say this without having tested whether very old engines can be enticed into capturing a rook and losing the game).

This little recreational pastime of taking the mickey out of chess-playing computers has a long history, which will be told at a later stage. I must admit: it is getting harder and harder as these things get stronger and stronger.



Editor-in-Chief of the ChessBase News Page. Studied Philosophy and Linguistics at the University of Hamburg and Oxford, graduating with a thesis on speech act theory and moral language. He started a university career but switched to science journalism, producing documentaries for German TV. In 1986 he co-founded ChessBase.
Discussion and Feedback

gambitg1 4/25/2017 03:12
Artificial intelligence is not same as specific rules. Of-course you can write if-then-else code for this position. That is not artificial intelligence.
You and I did not see this position before but our brain circuitry can more objectively evaluate it.
The mathematician's point is how much work has to be done before AI can be comparable to natural intelligence.
milignus 3/20/2017 06:28
-.- People often judge chess computers poorly. Conventional engines are not usually designed for such problems. Take Stockfish: the SF developer community is only interested in the engine's Elo. They do not care whether SF solves artistic problems or finds checkmates sooner if that does not contribute to engine strength. And since SF is FLOSS (free and open-source software), its developers have no commercial pretensions; they are not interested in publicity.

For these kinds of engines, fortresses are an Achilles' heel. Getting an engine to understand many kinds of fortresses without a negative impact on its Elo is difficult; the two goals may simply be inconsistent. I know Chiron is one of the best at fortresses; you should give it a try to check whether its assessment is more accurate.

On the other hand, I very much doubt that a decent engine would allow itself to be checkmated with Black here. I suspect the creators of the puzzle used some poor chess app that committed suicide, in order to be able to say that a computer blunder makes a checkmate possible. If that is not true, then it's yellow journalism. Speaking about the limitations of computer intelligence at chess would require being more specific, because in practical chess the limitations of human intelligence are considerably greater.
Peter B 3/20/2017 03:40
All this shows is that chess programs are faulty, in the sense that they can't handle a few exceptional cases. It is, in principle, not hard to extend chess programs to handle these situations. And when they do, it won't mean they are conscious.

As for the example: is the requirement a draw or a stalemate? The article is not clear. A draw is trivial (just never move a pawn). I can't see a stalemate.
benedictralph 3/20/2017 02:05
@polkeer

So where does this chess problem they created fit in? I see no connection at all.
polkeer 3/19/2017 03:03
Here, @benedictralph, is the quotation from the Telegraph article that I'm talking about:

"The new institute, which will have arms at UCL and Oxford University, has been set up to study human consciousness through physics and tease out the fundamental differences between artificial and human intelligence."
benedictralph 3/19/2017 01:47
@truthadjustr

"The Penrose institute has correctly nailed it, and only shows that chess engines only rely on brute force with priority on material advantage rather than positional understanding"

They didn't "nail" anything. If that's what they are claiming, it's something designers of chess engines and most people in AI have known for years. Programmers have, in fact, tried to code primarily for "positional understanding", but then the program ends up like a human master: often making more mistakes than if brute-force searching and material gain are prioritized. The proof is in the pudding. All things considered, chess engines today are simply better chess players than humans, even if occasionally a human player might brag that he could solve a very, very weird position faster.
Miguel Illescas 3/19/2017 01:05
https://twitter.com/illescasmiguel/status/843243317156548608
truthadjustr 3/19/2017 12:40
This position, and the computer engines' inability to evaluate it correctly, reveals a lot about how engines are made. Engines are meticulously programmed to evaluate balanced positions, and any imbalance causes them to sway away from the search horizon, never coming back. To the engine the deed is done and nothing more needs doing: the position is now "correctly" evaluated. Chess engines are not really optimized to understand a position; instead they will greedily take whatever is on the table, eating it right away because it looks delicious. So far the Komodo engine is the most positional and least greedy of them all, but it still cannot evaluate this position correctly. The Penrose Institute has correctly nailed it: chess engines rely on brute force, with priority on material advantage rather than positional understanding.
benedictralph 3/19/2017 12:22
@polkeer

What are you talking about? It says right at the top of this article the following claim:

"to defeat an artificially intelligent (AI) computer but be solvable for humans"

Really? ANY artificially intelligent computer? How about one programmed to solve this type of position?
polkeer 3/18/2017 04:04
@benedictralph

"[C]omposing some exotic position proves nothing against chess software, much less AI or artificial consciousness in general."

The Penrose Institute doesn't try to prove anything against chess software; it tries to use this example to study and understand consciousness through physics. The problem with such reductionism is that, according to it, you must believe that by analysing your computer and its screen you will understand how the YouTube video you are watching was made.
fgkdjlkag 3/18/2017 09:47
@Chvsanchez claims that the problem given is wrong! Why no comments on that?

I tried the problem he mentioned using stockfish and after 5 minutes it thinks that 1. Qg2?? is the best move. A human would quickly spot the solution. Heuristics for that kind of position would not be trivial to implement in a computer.
benedictralph 3/18/2017 05:12
@cythlord

"...more and more tools for engines to solve these puzzles (although solving puzzles was never the intent in the first place)"

Exactly. Just like a human grandmaster may find difficulty (or lack of interest) with a class of exotic chess puzzles that can only be fully appreciated by a very small minority of master composers. The human grandmaster was never trained for such things and should not be judged based on his ability to solve them. Yet another double standard between humans and computers.
cythlord 3/18/2017 03:23
@Toastmaster: Using the "chessbase cloud" analysis, we can see that an engine has, in fact, found the Kh2!! move and actually calculated it to mate, so just because engines don't see it immediately doesn't mean they will never see it. Nowadays there are more and more tools for engines to solve these puzzles (although solving puzzles was never the intent in the first place), such as Houdini's Tactical Analysis, the Cloud, and other tools.
benedictralph 3/18/2017 01:23
@Amtiskaw_

"Their point is that neither a human nor a chess playing computer will ever have encountered such a position in a real game, yet a human with average chess ability can still work out the solution without difficulty."

The truth is, the vast majority of humans, with or without chess-playing ability (though they all have "intuition"), would not be able to find the actual solution either. Secondly, why is there this assumption that the minority of good players who could indeed find the actual solution did so "easily"? Last I checked, the brain is an extremely complicated black box; we have virtually no idea how it really works and arrives at decisions. What we do know is that it is capable of making an inordinate number of mistakes (and often does). This is simply unacceptable with computers, so the two should not be compared, nor the human brain put on some kind of pedestal. At the end of the day, computers can still beat pretty much 100% of human chess players under any time control "without difficulty", including the world's best. What does that say about "intelligence", now that we've conveniently moved the goalpost to "intuition" when it comes to AI?
Peterdes 3/17/2017 10:21
Well, the deep problem with the article is a misinterpretation of the puzzle. The puzzle says that White has to find a way to "force Black into stalemate or win". The 50-move rule is _not_ stalemate (even if it is a draw!), so the above "solution" is simply invalid.
domnul_goe 3/17/2017 08:42
1. e4 c5 2. Nf3 Nc6 3. d4 d6 4. b3 Nf6 5. Bd2 h5 6. Bb4 cxb4 7. Na3 bxa3 8. c4 Qa5+ 9. Ke2 h4 10. d5 h3 11. dxc6 Rh5 12. Ne1 Rb5 13. Nd3 Rb4 14. Qd2 Ra4 15. Nc5 dxc5 16. Qb4 cxb4 17. Rd1 hxg2 18. h4 Bh3 19. Rxh3 Rd8 20. h5 Rd5 21. h6 Kd8 22. Rh5 Kc7 23. f4 Rb5 24. Rd6 exd6 25. Rc5 dxc5 26. e5 g1=B 27. Bh3 Kb6 28. hxg7 Bxg7 29. Kd3 Ng4 30. Ke4 f5+ 31. Kf3 Bh2 32. Bxg4 fxg4+ 33. Kf2 g3+ 34. Ke2 g2 35. Kd3 g1=B 36. Kc2 Be3 37. Kd3 Bexf4 38. Ke2 Bgxe5 39. Kd3 Ka6 40. Ke4 b6 41. Kd3 Bhg3 (draw or…) 42. Ke4 Bf6 43. Kf5 B4g5 44. Ke6 B3h4 45. Kd7 Be7 46. Kc8 Bef6 47. Kb8 Be7 48. c7 Bef6 49. c8=B#
Amtiskaw_ 3/17/2017 05:49
Those complaining that these sorts of positions are unrealistic are entirely missing the point that Penrose and co. are making. It is the artificial, impossible nature of these positions that is the most important thing.

Their point is that neither a human nor a chess-playing computer will ever have encountered such a position in a real game, yet a human with average chess ability can still work out the solution without difficulty. This suggests that there is some kind of imaginative analysis from first principles that a human can perform but machines, at present, cannot. Yes, you could adjust the programming of software to recognise particular strange situations, but that isn't the same as the software inherently having the imagination to work outside of its defined utility function.

This might all seem a little quixotic, but in fact it goes to the heart of current AI research, which is heavily based on the idea of "reinforcement learning". This involves training an AI by having it play a few million games and refining a neural network (e.g. how AlphaGo works). Yet these problems suggest a human chess player has a form of intelligence that is *not* based purely on reinforced learning, because they will not have encountered the problem before, yet can still solve it.
delax001 3/17/2017 05:04
I think we are diverging in this thread. Discussion is moving in the direction on how useful/practical would be to include the evaluation criteria that would allow to recognize these position into a regular/commercial chess engines. And this can be argued at length. There are ROI arguments, as well as debate where we want to spend the millions of positions per second calculating capabilities.
But the initial statement still remains incorrect, IMHO: "a problem devised to defeat an artificially intelligent (AI) computer but be solvable for humans". There is no doubt that chess engines are capable of finding the correct solution to these problems, if programmed accordingly.
vishyvishy 3/17/2017 11:22
White moves king on all available white squares with almost speed of light... and black plays like drunk , slow like snail ... Black doesn't notice his clock is ticking... keeps playing slowly... due to this BLUNDER then before 50 moves blacks time runs out ...Black flags and white wins!... otherwise anyway if black is awake then white gets draw due to 50 move rule! :) So it is a win or draw situation for white ...proved legally!
benedictralph 3/17/2017 11:14
@WildKid

I have yet to see a real-world game position that a modern chess engine couldn't handle. Where, for example, it totally blundered and lost (like human grandmasters often do, to say nothing of the vast majority of average human players). Even in such cases, I suspect the problem could be remedied by fine-tuning some of its heuristics. You know, like in a future build of the same program released the next year? This is why composing some exotic position proves nothing against chess software, much less AI or artificial consciousness in general. Now, if a computer could "come up" with original ideas or things on its own... that would be a step in the right direction and worth investing in. But in chess, what's really left to discover that's worth investing millions and years of work in?
WildKid 3/17/2017 11:00
@benedictralph

The Penrose position is so artificial that it is unlikely ever to occur; the second position, however, is of a general type that occurs quite often in 'fortress' endgames, where neither party can cross a pawn barrier. The deciding game in the Women's World Championship won by Tan Zhongyi had a little of this 'pawn fortress' flavor. There would definitely be a 'real-world' benefit to recognizing the type.
JiraiyaSama 3/17/2017 09:49
To the people who are thinking Roger Penrose has no idea about chess: He has a brother by the name Jonathan Penrose, who is a Grandmaster. I think he proposed this position intentionally.
benedictralph 3/17/2017 09:21
@WildKid

It's not so easy. Every new principle or heuristic that is added to a chess program will have to apply to literally millions of positions analyzed every second. The computer cannot afford to be "selective" about when to apply them or "trust its gut". So a point of diminishing returns is quickly reached. The position in this case is also EXTREMELY unlikely to occur in a real game. Possibly it will NEVER occur in a million years. Unlike human grandmasters, when a computer makes a single mistake it is thoroughly condemned as being totally useless or flawed. When a grandmaster misses something though, it is merely an "oversight" due to fatigue or stress or something like that. So why risk it?
sjb 3/17/2017 08:56
Interesting that the idea is found with some older programmes when a bishop or two is removed. How about with the converse - can we add more bishops (may take some moves away but add others) - via more promoted pawns etc
WildKid 3/17/2017 08:36
Several commenters have said that there is no real-world advantage to programming computers to recognize these types of positions. I disagree. I think it's important to recognize in-principle limiting characteristics of positions: for example, 'If Black manages to advance that pawn, he will have a fortress that White can never find a way through.' This is neither a positional judgment nor a tactical one: it's an in-principle long-term property of the position, of the same type as the one the computers arguably miss above. Being able to recognize such positions, and either go for or avoid them, would definitely make the computer a stronger player.
benedictralph 3/17/2017 05:06
@Toastmastergeneral

That's probably because the engine "pruned" that part of the move tree based on the heuristics that were programmed into it. The move in question probably seemed not worth considering based on said coded heuristics (until it was played). On the flip side, how many strong lines have human grandmasters missed that they only found out through the use of engines? A whole lot more, I suspect.
Toastmastergeneral 3/17/2017 04:47
Last I checked, engines can't find Nigel Short's "king walk" against Jan Timman from 1991. My engines don't see Nigel's brilliant 31.Kh2!! which kicks off the king walk. Once you plug the move in, the engines see the win instantly.
benedictralph 3/17/2017 03:17
@fgkdjlkag

Of course the computer does not "realize" the position is drawn because it is not programmed to "realize" such an unlikely event. Why is there always this assumption that AI programs are or should be designed as some kind of "general purpose" machine? They are NOT. General purpose machines tend NOT to be very good at anything. Just like a jack of all trades. Analogously, if computers were conscious, they might laugh at human grandmasters simply being unable to see an "obvious" forced mate 15 moves ahead which they could see in 15 seconds. Would this "prove" that humans are stupid?
Chvsanchez 3/17/2017 12:34
The diagram, in fact, shows the end of the solution. The real problem has the white pawn on d5 and a black knight on c6, the black bishop on h2 and a white queen on g3. The solution is 1.dxc6! sacrificing the queen, after 1...Bxg3 a computer will believe it's winning but in fact it's a draw.
moonsorrow55 3/16/2017 11:45
And if the solution somehow involves white WINNING instead of just a draw, black just puts the bishop on c7 and stops any tricks by white involving pushing the c pawn. Then white is left with nothing except king moves which are all drawn, or captures with his remaining pawns which appear to clearly lose in all lines.
moonsorrow55 3/16/2017 11:42
Super-easy for white to draw here, literally every move that isn't pawn to c6 draws.
fgkdjlkag 3/16/2017 11:04
I think the point is that while the computer could be programmed to be able to solve this kind of the position, what they are testing is that even though the computer can look ahead hundreds of millions of moves, it does not realize the position is drawn. It is more the conclusion with this horizon and not the specific ability of the computer they are looking at.
ewenardus 3/16/2017 09:44
Isn't there a mistake in the instructions? I thought stalemate is when your king can't move without entering check. This is a draw because of the 50-move rule, not stalemate. It should be White to win or draw, not win or get stalemated.
amandas 3/16/2017 09:10
Black can only move the bishops, so the king will do 50 moves and the position is not going to change. It will be a draw because of the 50-move rule.
vladmirsicca 3/16/2017 06:57
How does an engine behave if the white king starts at b1 instead? Does it commit suicide in order to decrease White's material disadvantage drastically?
jajalamapratapri 3/16/2017 06:46
PS I'm sure I saw similar fun positions when I was a kid half a century ago, or made them myself (any idiot can do that). This was not "invented" by Penrose.
jajalamapratapri 3/16/2017 06:44
Stockfish "solves" this easily in that it plays king moves until the 50 moves rule gets the draw. I'm sure any other engine will do the same.
vgn2 3/16/2017 06:40
Very surprised at the depth of the solution from such a great mathematician/physicist/writer!
Should have got it reviewed by some strong players before publishing...
delax001 3/16/2017 06:05
Computers do not program themselves, not so far. The above examples only prove laziness on the part of the engine programmers in not including such criteria in the evaluation function. These are extreme corner cases, with almost no practical use. In no way do these cases put into question that the search-and-evaluate process computers use, at ever-growing performance rates, has widely surpassed the threshold where it defeats the human chess-playing approach, based mostly on pattern recognition, reasoning and limited "calculation".