Winning starts with what you know
Recently I did what every reasonably intelligent, inquisitive person should do: I downloaded a bunch of files from the Sam Harris Making Sense Podcast page and stored them on a micro-SD card. This I inserted into my car radio, and now, while driving, I listen to very sensible discussions instead of news repeats, and traffic or weather reports. You should try it — there is an endless supply of material on Sam’s podcast page.
One of the podcasts, number 164, labeled "Cause & Effect", caught my attention: it was a philosophical discussion about the necessity of making robots communicate with us in our language, to behave like us, through the mathematization of cause and effect. Harris' interviewee is an 83-year-old professor of computer science and philosophy, and the two discuss conditions that can lead to consciousness, to self-awareness, in machines. In the end the case of chess-playing computers comes up, as the "embodiment of all the philosophical ideas". This is highly appropriate in view of the development of self-learning systems like AlphaZero, Leela and Fat Fritz. I have transcribed this section for our readers, since the machine-generated version is hardly readable. Here AI speech recognition still has a fair way to go.
But before we start let me introduce the discussion partners:
Sam Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA. He is the author of five New York Times best sellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). His writing and public lectures cover a wide range of topics — neuroscience, moral philosophy, religion, meditation practice, human violence, rationality — but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live. Sam’s work has been published in more than 20 languages. He hosts the Making Sense Podcast, which was selected by Apple as one of the “iTunes Best” and has won a Webby Award for best podcast in the Science & Education category.
Judea Pearl [the photo above is an Edge video grab from 2016] is an Israeli-American computer scientist and philosopher, best known for championing the probabilistic approach to artificial intelligence and the development of Bayesian networks. He is also credited with developing a theory of causal and counterfactual inference based on structural models. Pearl is a professor of computer science and statistics and director of the Cognitive Systems Laboratory at UCLA. In 2011, he received the Turing Award, the highest distinction in computer science. He is the author of The Book of Why: The New Science of Cause and Effect (coauthored with Dana Mackenzie), among other titles.
Judea Pearl, and this needs to be mentioned, is the father of the journalist Daniel Pearl, who was kidnapped and beheaded by terrorists connected with Al-Qaeda and the International Islamic Front in Pakistan in 2002 — basically for his American and Jewish heritage. Read his story and visit the Daniel Pearl Foundation.
In the following episode of the Making Sense podcast, Sam Harris speaks with Judea Pearl about his work on the mathematics of causality and artificial intelligence. They discuss how science has generally failed to understand causation, different levels of causal inference, counterfactuals, the foundations of knowledge, the nature of possibility, the illusion of free will, artificial intelligence, the nature of consciousness, and other topics.
Listen to the podcast discussion between Sam Harris and Judea Pearl
The section on chess starts at 1:28:10, but it is well worthwhile to listen to the full discussion on AI and self-consciousness. Like most of the Sam Harris podcasts it is definitely worth the time.
Sam Harris: There is a very strong opinion on the part of many people that the best chess engines on earth (take AlphaGo at the moment) have no concept of chess, they have no understanding of it, that they're playing a game, though they're doing it better than any person who ever cared about chess ever has. There's no chess in that, there's no experience of chess, there's no notion of chess.
Judea Pearl: There are different ways of winning chess, one of them is brute force...
SH: With or without brute force, even if we had a more intelligent chess program still, the brute force ones we have are better than the human ones we have…
JP: So does it mean that they don't understand chess the way we do?
SH: What does it mean to say that the algorithm that is producing the best chess play understands chess?
JP: They would not be able to write a chess commentary, because when they have to explain a move all they will tell you is: "I looked ahead, and I looked ahead again, and I came to a conclusion." And that is not what the New York Times will publish. The New York Times will publish a commentary in the following way: "I looked at the center and I saw that I had lost control over the center. So I decided to make a sacrifice, sacrifice my queen." That is what we mean: you don't communicate in my language, in my conception of the strategy. I have a different strategy than you. But it doesn't mean you don't understand chess.
SH: So there are many ways of dissecting out the variable of consciousness from a performance, even in vision, say, and especially in motor behaviour. Even in vision there's this phenomenon called blindsight, where you can have an injury to the occipital cortex, your primary visual cortex, and you can have a region of your visual field where you subjectively, you the subjective conscious person, think you are blind. But we can test whether you can see anything there, and the truth is you can predict, let's say, the orientation of a line with 95% accuracy, and yet your experience is of being blind. So your experience says you're just guessing, yet guessing successfully, and you're answering some of the criteria of vision: you are successfully getting information from the world, such that you don't consciously know how the line is oriented, but you can in fact guess correctly. So that breaks apart the phenomenology, the conscious part, from the intelligence, the information-processing part, at least for this example. But there are many other examples. It becomes more and more obvious in complex motor tasks like athletics, where you are learning to do something for the first time, like hit a golf ball: all of that effort is conscious, and it's a terrible experience of repeated failure. But once you start getting good, you lose your sense of how you do any of it, and it becomes unconscious when it is truly successful. And when consciousness begins to intrude later on in the process, when you learn something new that some golf instructor tells you to do, you can actually get worse, because it is disrupting this unconscious motor routine. So the question is: perhaps we could build a golf-playing robot, or a chess-playing robot, or an autobiographical speech robot, a robot that will tell you what it was like to be a little robot.
That would never be associated with consciousness. All of those performances would be successful. You're faking it successfully, but you are faking it.
JP: When do we go through the transition from System Two to System One (I am using Kahneman's terms), from the reasoning power to the automatic, heuristic power? When we are facing a new environment we have never seen before, and do not have any experience, then we have to reason. And then we go through a transition period which is called "acquired expertise." Experts differ from non-experts: the non-expert has to think about things, whereas the expert has it explicitly stored. … But the two of them work together, and in chess you can see it so beautifully: reasoning forward, what will happen if I move that piece, that is System Two, the reasoning part. With System One the master chess player looks at a game and says: you are in a bad position, I must make this move, without thinking ahead.
I love chess, not as a player but as a programmer — I wrote a book about heuristics. The chess game is the embodiment of all the philosophical ideas about System One and System Two. It is all embodied in something that is so easy to program, so easy to understand.
So what is the intuition part of chess? That is the evaluation function you put on a chess position: material advantage, control of the centre, I've castled already, things of that sort. I look at a chess position and have an intuition that it's all good. Shannon had a brilliant idea: he was the first to say let's marry the two, heuristics and reasoning. Look ahead, and when you come to the horizon of your search, load it with your intuition, with your dirty evaluation of the board. Then roll back and see which move gives you the best chance of winning. It was the first marriage of the two components: heuristics on the one hand and reasoning on the other. A beautiful combination, and I think many conversations about the interplay between these two modes can be nicely demonstrated in chess. That's why I like it as a metaphor. Just imagine a hypothetical program in which you store a number for every chess position. There are not enough molecules in the universe to store it. But imagine you have computed it, and every chess position has a number: how good is it? Then everything is done. It is faking it; that's what you would call a faking chess-playing program.
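Shannon's scheme, look ahead to a fixed horizon, apply a heuristic evaluation there, then roll the values back, can be sketched in a few lines. The toy "race to 10" game below (players alternately add 1 or 2 to a running total; whoever reaches 10 first wins) is purely an illustrative assumption, not code from any real chess engine, but the minimax structure is exactly the marriage Pearl describes:

```python
# Toy game: players alternately add 1 or 2 to a running total;
# the player who reaches the target first wins.
TARGET = 10
MOVES = (1, 2)

def evaluate(total, maximizing):
    """Heuristic ("System One") value from the maximizer's point of view."""
    if total >= TARGET:
        # The game is over: the player who just moved reached the target.
        return -1 if maximizing else 1
    # A crude horizon guess: nothing decided yet.
    return 0

def minimax(total, depth, maximizing):
    """Shannon's scheme: look `depth` plies ahead, evaluate at the
    horizon, then roll the values back toward the root ("System Two")."""
    if total >= TARGET or depth == 0:
        return evaluate(total, maximizing)          # intuition at the horizon
    values = [minimax(total + m, depth - 1, not maximizing) for m in MOVES]
    return max(values) if maximizing else min(values)  # reasoned rollback

def best_move(total, depth):
    """Pick the increment whose rolled-back value is best for the mover."""
    return max(MOVES, key=lambda m: minimax(total + m, depth - 1, False))
```

With a deep enough search the heuristic never matters and the rollback is exact: from 0 the winning move is to add 1, leaving the opponent on a total of 1, from which every continuation loses. In a real engine the search is cut off long before the game ends, and the quality of the "dirty" evaluation at the horizon is what carries the day.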
SH: Just please make sure I understand: there is in fact a finite number of chess games, but it's an astronomically large number, so we could never tally them all. But if you could tally them all, the entire problem of chess would be solved. From any board position you would be able to trace whether the next move wins.
JP: That’s right. There is a characterization of every chess position: is it a win, a draw or a loss for the player to move? We know from basic principles that it exists, we can prove that it exists. We don’t know how to label it, but we can hypothesize that a huge program could create a table for you. So for every chess position you have win, loss or draw. You’re done.
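Chess is far too large to tabulate, but the labeling Pearl describes can actually be carried out on a small game. The sketch below (a toy illustration, not real chess code) assigns every reachable tic-tac-toe position its game-theoretic value for the player to move: +1 win, 0 draw, -1 loss:

```python
from functools import lru_cache

# The eight winning lines of a 3x3 board, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Game-theoretic value of `board` for `player` to move:
    +1 win, 0 draw, -1 loss under perfect play by both sides."""
    if winner(board):
        return -1                 # the previous player completed a line
    if "." not in board:
        return 0                  # full board, no line: a draw
    opponent = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            best = max(best, -solve(child, opponent))  # negamax rollback
    return best
```

Calling `solve("." * 9, "X")` labels the empty board, the tic-tac-toe analogue of asking whether the first player wins, draws or loses from the initial position; for tic-tac-toe the answer is a draw. The same retrograde idea, applied to chess positions with few pieces, is how real endgame tablebases are built.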
SH: Would that be by definition true at the beginning of the game, that white would always win?
JP: We don't know by the way.
SH: We don't know? Intuitively it seems like a first-mover advantage would be decisive in that universe.
JP: It may be a draw. It may even be a loss.
SH: That would be fascinating to play and adjudicate. So, in a world where chess is solved, which is to say we know what perfect play is for each side of the board, is it possible that White loses?
JP: Sure, because you have to move.
SH: What would you put the probability at?
JP: I think it is very small, but based on what?
SH: Okay, if your fate, or the fate of humanity, depended on you choosing White or Black, in a game of perfect chess…
JP: I’d choose White.
On this final point I (FF) would like to mention that I would not choose White to win in a game of chess in which both sides have perfect play. I predict it will be a draw, and am pretty sure this is correct. I described why, in a slightly playful vein, in my article How God plays chess. But there is another prediction I would like to throw out: I am coming to the conclusion that the maximum Elo rating for any player of chess – whether (super-)human or artificial, brute force or neural network, electronic or whatever – is 4000. Maybe 4100, but not more. Perhaps you would like to speculate why I believe this to be so. Please post your ideas, and also your opinion of the Sam Harris/Judea Pearl discussion, in our feedback section below.