Garry Kasparov on Artificial Intelligence and New Technologies

January 15, 2018 (Episode 102)

Taped December 13, 2017

Table of Contents

I: Chess and Artificial Intelligence 0:15 – 16:34
II: Technology and Politics 16:34 – 39:57
III: Technology and Education 39:57 – 59:54

I: Chess and Artificial Intelligence (0:15 – 16:34)

KRISTOL: Hi, I’m Bill Kristol, welcome to CONVERSATIONS. I am very pleased to be joined again today by Garry Kasparov, great chess champion, democracy activist, and author of a recent book on machines, Deep Thinking – I think that’s what it’s called?

KASPAROV: Yes.

KRISTOL: So, I mean, I know very little about this, so tell me, should I be excited that machines are going to transform humankind and save us from all kinds of things? Should I be terrified that they’re going to de-humanize our lives? What’s the truth about artificial intelligence and machine learning and all this? To oversimplify it.

KASPAROV: No, I want you to be excited, but I’m always concerned when people make these two very distinctive propositions. Should I be excited? It’s more like a machine, artificial intelligence – actually, I prefer augmented intelligence, because we still don’t understand exactly what AI means. If you ask ten experts about AI, the meaning of AI, you may end up with eleven different answers.

So on one side you have these preachers of salvation – the only way to change our lives in the future. And the much bigger group on the other side, talking about Pandora’s box, opening the gates of Hell.

I mean, let’s be pragmatic. Let’s talk about new technology. It’s a new tool; it’s a very powerful tool. Obviously it’s different from what we had before because now it covers the area which we believe was predominantly human, only human: it’s cognition. But, it’s still a tool.

And I might be called an optimist – if you look at this simple classification – but I consider myself a pragmatist. Because I know from human history, from the history of civilization, from the history of technology, that every breakthrough technology threatened some jobs. It created problems.

And we also know that most of the technological breakthroughs ended up first as a weapon, as something destructive. Because it’s easier; it’s just to destroy something. Like nuclear technology: the first thing to come was a bomb. And only then could we actually figure out how to use this technology to benefit humanity.

AI, whatever it is, is not different. And people just look at, say, my matches with Deep Blue twenty years ago, or recent games played by newer machines in Go, or machines dominating the popular video game DotA – that’s a program created by Elon Musk’s team.

Now, in Go, the AlphaZero program came out of Google’s lab, DeepMind, led by Demis Hassabis – it’s located in London. But people miss a very important point: whether it’s chess, Go, or DotA, these games are different – Go, for instance, is much more complex than chess; it’s more abstract, more strategic, so it’s more difficult to accumulate the kind of sophisticated knowledge we did in chess – but all these games represent closed systems. Which means we humans decided on the framework, and we filled the machine with a target, with rules.

There’s no automatic transfer of the knowledge that machines could accumulate in closed systems to open-ended systems. Because machines can do many things, and they will learn even more things in the future, but I don’t see any chance in the foreseeable future for machines to ask the right questions. Now, they can ask questions, but they don’t know which questions are relevant. Which means that, if it’s an open-ended system, machines – the most powerful, the smartest, the most sophisticated one, will never recognize the moment where it just entered the territory of diminishing returns.

KRISTOL: So, yeah, explain that a little more – what makes the system? How do we know a system is an open-ended system? Couldn’t someone say, well, you’re just using this to –

KASPAROV: An open-ended system means that you don’t have fixed rules. In a closed system you still have rules, and you have the target. For instance, in chess it’s about winning the game by mating your opponent’s king. And as we know from the latest experiments with the AlphaZero algorithm, when they used it for chess as well: starting from scratch, from zero, just having the rules, within several hours the machine played 60 million games, some insane number of games, and it ended up with its own scale of evaluation. It actually created its own theory. And we’re learning that some of the human knowledge accumulated over centuries was flawed. But –
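The “start from only the rules, play yourself, build your own evaluation” idea Kasparov describes can be illustrated with a toy self-play learner. This is only a sketch under invented assumptions – a trivial counting game stands in for chess, and simple tabular value updates stand in for AlphaZero’s actual search-plus-neural-network training:

```python
import random

# Toy self-play learner: it knows only the rules of a trivial game
# (players alternately add 1 or 2 to a counter; whoever reaches 10 wins)
# and builds its own evaluation purely by playing against itself.
TARGET = 10
value = {}  # learned evaluation: state -> estimated win chance for side to move

def play_self_game(eps=0.2):
    state, history = 0, []
    while state < TARGET:
        history.append(state)
        moves = [m for m in (1, 2) if state + m <= TARGET]
        if random.random() < eps:                      # explore sometimes
            move = random.choice(moves)
        else:                                          # else leave the opponent
            move = min(moves, key=lambda m: value.get(state + m, 0.5))
        state += move                                  # the worst position
    # The player who made the last move won; propagate the result backwards.
    for i, s in enumerate(reversed(history)):
        won = (i % 2 == 0)                             # last mover won
        v = value.get(s, 0.5)
        value[s] = v + 0.1 * ((1.0 if won else 0.0) - v)

random.seed(0)
for _ in range(5000):
    play_self_game()

# The learner "discovers" that 9 is a won position for the side to move
# and 7 a lost one, without ever being told any strategy.
print(round(value[9], 2), round(value[7], 2))
```

With enough games the learned values converge to the game-theoretic truth for this toy game (positions 1, 4, 7 are lost for the side to move), which is the spirit, if not the scale, of what Kasparov describes.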

KRISTOL:  Explain that a little more because people are freaking out a little bit about AlphaZero as a kind of next step in artificial intelligence or machine learning. Different from the brute force of –

KASPAROV: Exactly. First of all, yes, it’s a next step. And if we go all the way back to the beginning of computer science, at the dawn of this new industry, the great pioneers, the founding fathers like Alan Turing, Norbert Wiener, or Claude Shannon, all thought that chess could be one of the turning points, a watershed moment. Actually, for them, the moment a machine could beat top players in chess, including the world champion, would be the dawn of artificial intelligence.

And Claude Shannon explained the difference between two types of machines. We call them Type A, “brute force,” and Type B, which has a more “human-like” decision-making process. It’s quite ironic that Deep Blue, the machine that succeeded in winning this quest by beating the current world champion in 1997, was a Type A machine.

KRISTOL: So it’s brute force; they programmed in every game that’s ever been played, basically?

KASPAROV: Yeah, they also had their coaches. They looked for the most sophisticated algorithms, the scale of evaluation. Deep Blue was not strong by modern standards. Today, the free chess app on your mobile phone is stronger than Deep Blue. That’s Moore’s Law. But in 1997, it was too strong to take lightly, and I was not well prepared.

If you look objectively at the strengths, I was probably still stronger. But this is – again, that’s another misunderstanding. When people talk about computers and chess, they say, “Oh, machines solved chess.” No. Machines cannot solve chess, because chess is, you may say, a mathematically infinite game. The number of legal positions, according to Claude Shannon, is on the order of 10 to the 45th power. You cannot calculate that.
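A quick back-of-envelope calculation shows why a number like 10^45 puts exhaustive calculation out of reach; the positions-per-second figure here is an invented, generous assumption:

```python
# Why brute-force enumeration of chess is hopeless, in round numbers.
SHANNON_ESTIMATE = 10**45          # the figure Kasparov cites
POSITIONS_PER_SECOND = 10**9       # assumed: a billion positions/sec
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = SHANNON_ESTIMATE / (POSITIONS_PER_SECOND * SECONDS_PER_YEAR)
print(f"{years:.1e} years")        # on the order of 10^28 years
```

Even a machine a trillion times faster than that assumption would still need unimaginably longer than the age of the universe, which is why engines win by searching selectively and evaluating, not by solving.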

But it’s not about solving; it’s about winning. And winning means that you just have to make fewer mistakes than your opponent. And humans, even the best humans, are prone to make mistakes. We are humans. We’re vulnerable, especially under pressure – as I was, under the heavy pressure that I experienced during the matches against Deep Blue, both in 1996, the first match, which I won, and in 1997, the one I lost. I played a few more matches afterwards, and some other top players played matches too. There was a very short window when machines could play with humans and the result was still unpredictable. After the match with Deep Blue, I played two matches: with Deep Junior, the Israeli program, and with Deep Fritz, a German program. Both ended as a tie. Vladimir Kramnik, my successor as world champion, also played matches: one was a tie; one he lost.

The final moment was when Mickey Adams, the top British player – top ten at that point, in 2005 – played against another supercomputer, Hydra, an analog of Deep Blue but already more sophisticated. And he was totally crushed, making only one draw in six games.

So this pattern can be applied to anything else, any other area of human activity. First, machines are just too weak – it’s impossible to imagine they will ever challenge humans. At the beginning of the competition, machines look feeble; they’re not there. Then there is this short window where we can compete. And afterwards, machines are far superior forever.

KRISTOL: But then the AlphaZero breakthrough. So explain that. Why is that different?

KASPAROV: Yes – but first, there were some attempts to actually create Type B machines in the ‘50s and ’60s.

KRISTOL: Type B being?

KASPAROV: Being more human-like. So that’s just to –

KRISTOL: Instead of brute force.

KASPAROV: Instead of simple brute force – trying to make assessments, not to rely on brute force. But all these machines had been obliterated by brute force, because Moore’s Law helped to increase the power of calculation, and the algorithms were not there. So these few attempts in the ‘50s, ‘60s, and early ‘70s were futile. And it ended up with the total dominance of Type A machines.
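The Type A “brute force” approach Kasparov contrasts with Type B is, at its core, exhaustive minimax search: look at every line, evaluate the endpoints, back the values up. A minimal sketch over a hand-made toy tree (real engines add pruning, a depth limit, and a static evaluation at the leaves):

```python
# Minimal minimax: the heart of a Type A "brute force" engine.
# A node is either a numeric leaf (a static evaluation, higher favors
# the maximizer) or a list of child nodes.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # leaf: return its evaluation
        return node
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# Tiny tree: the maximizer picks between two replies the minimizer controls.
# Branch 1 guarantees at least 3; branch 2 only 2 -- so minimax returns 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3
```

The “obliteration” Kasparov mentions follows directly: once hardware could expand this tree deep enough, even a crude evaluation at the leaves beat the early attempts at human-like selectivity.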

Now, we also have to recognize the fact that even Type B machines, like AlphaZero, still rely on brute force. It is still an element. But now you can see a glimpse of what we may call “augmented artificial intelligence,” because it starts from scratch and develops its own scale of evaluation. That is a fundamental difference.

KRISTOL: So AlphaZero looks at the rules of chess, and the goal of chess, and it doesn’t have your games or Bobby Fischer’s games or anything in its memory, so to speak? It just looks at the rules and figures out de novo what to do.

KASPAROV: Yes, exactly. It happened first in Go: they had the AlphaGo program that destroyed the strongest Go players. And then they made an experiment of having AlphaGo Zero – that is this new algorithm – play AlphaGo. They played 100 games, and AlphaGo Zero won all of the games – I mean, all the games.

Now, I was quite shocked, but then I came to the natural conclusion that in Go, unlike in chess, because the game is so complex, the human knowledge was very inferior. In chess, you cannot expect Magnus Carlsen or Garry Kasparov to look at the machine’s game and say, “Oh, I never thought of this move.” In Go, you heard it all the time, because Go players, with all due respect to them, are more like at the level of top chess players at the beginning of the 19th century. There is nothing wrong about it; it just shows the complexity of the game.

Now, in chess, we thought it would not be possible. What we learned from AlphaZero chess is that even knowledge we believed – and I think rightly so – was very sophisticated can still be challenged. Because starting from scratch, it ended up with some interesting evaluations. Every computer has a scale of evaluation, which means that you put numbers on certain factors: for instance, king safety; control of the center; activity of the pieces. And that also indicates the character of a programmer.

So that’s why I always made a joke that it is not surprising that the German program is more strategic and less aggressive than the Israeli program. It’s a clear reflection. Because you have to add these numbers and make sure that the machine will use the scale of evaluation to navigate complicated positions where you cannot find a forced line to win or to draw. AlphaChess developed this scale itself – just from scratch.
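The hand-tuned “scale of evaluation” Kasparov describes can be sketched as a weighted sum of positional factors. The weights and feature names below are invented for illustration, not taken from any real engine:

```python
# Illustrative hand-tuned evaluation scale of a Type A engine.
# Positive scores favor White; the weights encode the programmer's
# "character" -- raise king_safety relative to piece_activity and you
# get a more cautious engine, as Kasparov jokes about the German vs.
# Israeli programs.
WEIGHTS = {
    "material": 1.0,
    "king_safety": 0.5,
    "center_control": 0.3,
    "piece_activity": 0.4,
}

def evaluate(features: dict) -> float:
    """Linear combination of positional factors (all values hypothetical)."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Up a pawn and safer king, but slightly passive pieces: still clearly better.
score = evaluate({"material": 1, "king_safety": 2,
                  "center_control": 0, "piece_activity": -1})
print(score)  # -> 1.6
```

The contrast with AlphaZero is exactly here: nobody wrote this table for it; it induced its own internal equivalent from millions of self-play games.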

KRISTOL: Does that mean there’s no, so to speak, bias? That is to say, there is no programmer being temperamentally aggressive or temperamentally cautious? There’s no such thing.

KASPAROV: Yes. It created it based on the number of games, on the millions of games played. But ironically – and that’s another interesting thing, I would say contrary to our expectations – instead of being more solid, more strategic, which was the trend we saw with modern Type A programs, AlphaChess is far more aggressive.

So it plays very dynamic, very aggressive chess. Which means it has discovered interesting connections within what I call the triad in my book How Life Imitates Chess: material, time, and quality. You sacrifice material, you gain some time and maybe some quality. And it requires interesting analysis. But that’s what I predicted in Deep Thinking, my latest book: that with these new Type B machines, we will actually discover new patterns. Even in territory that we thought was totally explored, there are still things to learn. And, at the end of the day, it is not the end of the story.

So for me, this is just the beginning, because even with all the phenomenal success of AlphaChess against other programs – it crushed the strongest Type A programs – it’s still operating in a closed system. And so far, I see no immediate transfer of this knowledge into open-ended systems.

KRISTOL: Yes, so I was going to ask about that: is the distinction between closed and open-ended systems itself something we’re putting too much stock in now, so that 20 years from now we’ll say, well, that was also something that was overcome? Or is it a fundamental distinction?

KASPAROV: It’s fundamental; it’s philosophical. Because it’s about asking the question – and asking questions is very much human nature. So far, I see no indications that a machine could decide which questions are relevant. I spoke to many experts, those who spend their lives developing these algorithms. And while machines are capable of asking questions, it still leaves us [in] a gray zone where human assistance is needed.

And that’s what I believe is the future: collaboration. Because no machine will ever reach 100 percent perfection. There will always be a gap that requires human assistance. Psychologically, it’s very challenging for humans to recognize the fact that we belong to the last few decimal places.

But there’s nothing wrong about it, because the brute force, the overall machine’s power, is so big and massive that if we know how to channel it – just a little tweak, just a half degree left or one degree right – it could make a hell of a difference.

And I think that we just have to start learning about these new forms of collaboration. I would recommend that we stop crying, stop making these doomsday predictions, and concentrate on the positive side: how many new great things we can achieve if we find the right way to collaborate with these machines.

II: Technology and Politics (16:34 – 39:57)

KRISTOL: So let me ask you first about sort of the economy, and then about political thinking. I mean, on the economy, I suppose, people are very worried that, you know, we’ll have self-driving cars and people will lose jobs and there’s no real replacement for those; we get more and more automation. Do you think that is exaggerated, that worry, or even foolish?

KASPAROV: Will jobs be lost? Yes. That is what always happens with new technology. But how can you stop the progress? You go back to the sort of beginning of our civilization and you could see that machines always replace humans. And how many millions, maybe tens, maybe hundreds of millions of jobs in manufacturing have been lost to automation in the last 100 years? And we thought it was a natural process.

The difference now is that machines are going after people with college degrees, political influence, and Twitter accounts. But, in the big picture, it is still the same pattern. And I think that every industry that is now under pressure from machines, from AI, from automation, it’s a natural process. If there is no pressure, it means stagnation.

And while the jobs will be lost, we should now start thinking about jobs that will be created. Because if we try to slow it down, it will not save jobs, but it will protract the agony and will prevent us from creating new industries and generating economic growth and creating some financial surplus that might help people who will be left behind.

I think we’re also dealing here with psychological stumbling blocks in the mind of the average person. People want to have all the benefits from technology, but they tend to ignore the fact that technology is agnostic. The device that everybody carries in his pocket or in her purse is not good or bad. It could be used for good things; it could be used for very bad things. You can create a very powerful terrorist network with it.

And also, this technology creates benefits for the average person, but at the same time destroys the traditional connections within our society. Thanks to new technology – early diagnosis of terminal illnesses, new medicine, diet – people now live 75, 80 years; 100 years ago the lifespan was 25, 30 years shorter. Now, this is the irony. Thanks to new technology, thanks to overall progress, we have more people who not only live longer but have a lot of energy. I am in my mid-50s, so that is my generation. We still have plenty of energy; we want to work. But the challenges created by technology are making this generation, which is still pretty active, almost redundant compared to the young generation. So we have a generational gap.

And I think it’s very important that we stop separating technology, societal problems, political problems, financial problems, and start looking at the big picture. The big picture is that technology offers us new opportunities but has created many challenges. And if we try to find a win-win solution, we are going to lose. Because there will be losses, inevitable losses.

We should also realize that older people vote in much bigger numbers, so they tend to elect politicians who want to preserve the status quo – which is natural – politicians who are not eager to implement changes. So the gap between the demands of the market, forced by the technological revolution, and the expectations of baby boomers, for instance – this gap keeps growing.

I don’t have an answer, but I think that, going back to humans and machines, you have to ask the right question. It seems to me that we have been betraying the fundamental rules of capitalism. We want only benefits without risks – and risk means that some people, some groups, even some corporations – God forbid, countries – will be losing, at least temporarily.

But this is a competition. This is an open market. And trying to enjoy the benefits of the open market and the competition that created these beautiful technologies, on one side, while imposing regulations and stifling initiative to slow down progress, on the other – it’s cognitive dissonance.

KRISTOL: It seems to me you’re more worried about attempts to avoid technological progress or slow it down, from the point of view of the West and the point of view of the dynamism of our economy and of our country. I mean, you are on the side of it’s going to happen anyway and it’s much better to embrace it?

KASPAROV: I think it’s counterproductive to stand in the way of progress, because that’s what has always made the free world – what you call “the West,” although it also includes Japan and South Korea, Australia – an engine of progress.

And now we try to rest on our laurels. “Let’s get only the benefits; but we need protection.” Look, you cannot have both, because the competition that will create new technologies and new machines – things we don’t even know yet, great things that will change our lives – will be the result of us taking huge risks.

That’s one of the reasons I am a big supporter of the idea of renewing space exploration. It’s very important because we don’t know what we will find there, and that’s exactly the argument in favor, not against it.

And what we know from history is that any exploration, any physical expansion, always brought us side effects, other benefits that we didn’t expect. The space race in the ‘60s brought us the internet and GPS and many other things. So it’s very important that we look at exploration, at our ongoing adventure of opening new frontiers, as a vital part of the success of the free market.

KRISTOL: How worried are you that democracies – because old people vote and established interests have more power to protect themselves – are disadvantaged in this race against the kind of authoritarian-but-quasi-capitalist or progressive China-type, let’s say, system? Or is that –

I mean, the normal argument would have been, in the past, I think, “Well, free markets will be better because we have competition, we’re free, and the government is not good at directing these things.” But, I think, more recently some people argue more the opposite, right? That China can throw a huge amount of money into this and they don’t have political interest groups insisting that, you know, the automation slow down and so forth. Do you think this ends up helping the free world or is a challenge to the free world?

KASPAROV: I mean, first of all, we have yet to see any revolutionary product developed in the un-free world. It has not happened yet. So that’s why it is all theoretical. So everything that we use today has been created in the free world, because it’s –

Breakthrough technology is a challenge to authority; it’s a challenge to the past. It’s a free mind. And it’s a huge risk. Before you get Steve Jobs, you maybe had 999 failures. Before you come up with Google, you had an equal number – hundreds and hundreds of failures. And central planning, the state-controlled economies, tend to choose the winners. If they invest in 100 enterprises, they want to know which one will come up with Apple products or with Google or with Netflix, you name it. It’s impossible. It’s the vibrant air of competition that centrally-planned economies lack.

Now, having said that, we should still remember that China – and only China, actually – will have a competitive advantage over the United States due to two factors. One is, when you look at the internet, when you look at computers, many things in this industry will be done by processing data. And sheer numbers are on the Chinese side. They will have access to massive data that is much bigger, because they have 700 million people connected to the internet. So they can generate more data, and it gives them an edge.

I still think it’s not enough to overcome the traditional problems within Chinese society: the deference to your teachers, to the elders. So it remains to be seen how China moves from copy-paste to inventions. It may happen; but, again, I don’t think it is automatic, or that big numbers will solve it.

Now, the other advantage probably reflects the different nature of the regimes. When you look at biotech, at genetics, no matter how much research you do with all the computers at your disposal, it’s still about experiments. And we understand that in China, they don’t care. They can experiment; they can throw X number of people – thousands, maybe even hundreds of thousands – down the drain if they think it will help them come up with a vaccine or a new drug. In the free world, it’s impossible.

But while I understand that there’s no way – and thank God – that we can replicate the Chinese or communist or totalitarian way of treating people and solving scientific problems, we also have to remove the fetters of regulation. Because right now, when you look at the regulations of the FDA and similar institutions in Europe, they basically prevent you from doing anything.

And that’s one of the reasons why I insist that space exploration is important. The moment you start doing risky things, like preparing a manned flight to Mars, your chances of success could be 50/50. And then you can start applying risky medicine – with a 30 percent risk that is totally unacceptable in normal conditions. Creating extreme situations is the way for the free world to meet the challenges posed by these developments in biogenetics and related sciences.

KRISTOL: And on the sort of more Orwellian privacy, liberty concerns – again, there was a while when Orwell was the model, “technology will crush us all,” you know, it helps the government crush us. Then there was 1989, in the sense that technology was helping people rebel against totalitarianism, freeing people up, you know, fax machines and the internet. How does that play out do you think?

KASPAROV: I’ve written many articles on the issue of privacy and security. And I’ve been working lately with Avast, the largest producer of security products for individual customers. I wanted to analyze this situation because it’s not an easy one; there’s always a balance. But before we move to the potential solutions, we just have to agree on some definitions.

First of all, I keep telling people that Google’s data collection is not the KGB or the Stasi. They have to realize that information that is available will always be collected. And it sounds odd to me, because people are willingly sharing all this data. “Oh, a new iPhone! It can recognize my picture. Perfect, let’s send it.” So, you did it. You pushed the button.

So people keep sharing information. They’re even willing now to give the keys to their houses to Amazon for convenience. I cannot imagine people sacrificing convenience for security. That’s what’s amazing: psychologically, people are ready to do it, yet they keep complaining about data being collected. If it’s in open space, somebody will collect it.

Now, let me move to the second stage: Who’s collecting it? I think the government could do more by actually punishing violators. It’s not about regulations; it’s about guaranteeing that the information cannot be used to harm customers, with severe punishment for those corporations that renege on their obligations to protect privacy. It’s not an ideal solution; it’s a partial solution. But I think that trying to stop it, trying to regulate the flow of information, will not help. Yes, you can attack Google; you can attack Apple; you can attack Facebook. But how are you going to deal with Putin, Kim Jong-un, the Chinese, Iran and the mullahs?

We don’t live in a perfect world regulated by the bureaucracy in Brussels or Washington, DC. The world is imperfect. So if data is there, and you try to block access to it or make it almost impossible for data to be collected here, it will be collected elsewhere.

And from my personal life experience – I grew up in a communist country – I understand that information collected in the United States could potentially be used against an individual customer, a U.S. citizen. But this citizen is protected by the law of the land. It’s highly unlikely that this information could totally ruin his or her life. Possible, but unlikely, because the numbers are so big and there are many layers of protection.

Now, the same information collected in Russia, in China, in Iran, in Turkey nowadays could, physically, end the life of an individual.

So that’s why, when I hear all these complaints at conferences, all this blaming of big corporations – while I still think more can be done by putting pressure on them to protect the data, trying to stop it is, again, like with AI: it’s counterproductive. It’s happening anyway. And if you try to slow down the process, if you try to impose restrictions on research, the Putins of this world have more than enough money to buy this research. And these new technologies, this data, will end up in the wrong hands.

KRISTOL: So it sounds like you think that, on the whole, these developments are either friendly to liberty or at least neutral? I mean, they’re not unfriendly.

KASPAROV: I think, overall, it’s friendly. But, again, it’s not without challenges. Because we can see how these technologies have been used by Putin for fake news, for creating a whole new industry, and for pushing his agenda in a way that the Soviet authorities, the communist leaders, couldn’t even imagine.

But it doesn’t mean that we have to stop progress; we probably just have to give more power to individuals. And, by the way, my advice to people who are complaining about technology being used for distorting the truth and feeding people wrong data: you cannot solve all the problems; you cannot prevent all the hacking attempts; you cannot prevent your phone or your computer from being poisoned by these crazy stories. But it’s like elementary hygiene. We wash our hands. We clean our teeth. It doesn’t solve all the problems – we can still get a virus – but we know we have to do it, and it will kill 80 or 90 percent of the potential viruses. We could still be vulnerable to other things. Sometimes we go get a vaccination.

So how about you, an individual who is complaining about all these problems, doing these elementary things? It will not save you from every problem. But when you look at the way people treat their phones or their computers, you would be amazed by the lack of concern – they treat them with almost no respect.

Okay, fine, it’s a device. They keep complaining about problems – but what about pushing a few buttons, buying a couple of products and installing them, or just looking at Wikipedia to check some of these fake news stories? It will take a few minutes of your time, maybe 30 minutes, but it will prevent so many problems that are inevitable if you don’t follow the elementary rules.

KRISTOL: Yeah, that’s interesting. And what about, since you mentioned Putin – but let me just back up to one thing: How much does the artificial intelligence help in understanding politics? I mean, that’s an open system, right?  And it seems to have inflection points and tipping points and so forth, which are hard to predict, I suppose.

But, I mean, do you feel like 20 years from now we’ll be having a discussion about international politics, and it’ll be fundamentally different because of augmented intelligence and computer, machine learning, or not, really?

KASPAROV: I think it’s still very much on the side of human nature, and human nature is based on many unpredictable factors. Because, at the end of the day, even the new machines, the Type B machines, go through millions and millions of games, whether it’s Go or chess, and they look for patterns. They have to find patterns. Based on these patterns they build the evaluation scale, and based on this evaluation scale, they try to offer recommendations.

Now, with humans, how can you identify patterns? Yes, you can identify some patterns. But as we saw in the last election cycle in this country, all bets were off. There are still many things that cannot be easily predicted. And I think that if we eventually have AI as a factor in politics, it will still [need] collaboration with humans, because you will need to make adjustments to teach the machine about the inconsistency of human behavior. Many things happen just under the influence of what we call momentum. How can you explain momentum to the machine? Because it could be momentum today, and in a slightly different situation it will be exactly the opposite.

KRISTOL: Yeah, I was going to ask about that. Machines don’t, so far as you can tell, can’t capture that quite – the sense of –

KASPAROV: No, but – I was speaking after the special election in Alabama. It’s just another election. I don’t think you can actually teach a machine about the psychological importance of an election, because it looks at the statistics: the Democrats are still, you know, just short two votes, if you consider that Pence is the tie-breaking vote in the Senate. It’s just one election. If you look at the overall trend, it doesn’t have that size of difference. But when you start adding psychological factors, it could be a game-changer.

But how can you actually bring in this psychological factor? So, I don’t know. I still think it will be very difficult for machines to understand this, and actually to incorporate these very tiny fluctuations in human mood, in the way society moves from one pole to another.

KRISTOL: Right. And don’t you think inflection points and tipping points seem to be something that’s hard to figure out artificially? That these things – things go this, this, this, and suddenly –

KASPAROV: But again, we don’t know. We’re talking about so many people being engaged. This is not old-fashioned politics with a few countries and two, three, four, five big leaders making decisions. Now you’re talking about millions and millions of people participating, small or big. They can engage in the game because of Twitter, because of Facebook, because of all these modern means of communication.

And how can you evaluate the mood of this mass? Because just a tiny shift could actually change everything, because the mass is so big. And the opportunity for very ordinary people to influence the process – again, it’s still small, but you keep adding small, small, small, small, and a million small comments could create an opinion more powerful than one coming from the Oval Office.

KRISTOL: So our fate is still in our hands, despite –

KASPAROV: Absolutely.

III. Technology and Education (39:57 – 59:54)

KRISTOL: You mentioned different generations having different attitudes toward technology, which is understandable, obviously. What about really young people? What about people who are being educated, whether in elementary and secondary education or, I suppose, higher education as well? Are we doing a decent job of preparing them for the world they’re going to live in? Are we doing a decent job of using technology to improve their education? What’s your sense of that?

KASPAROV: Speaking about the big picture – I think we missed education. It’s not just technology; it’s society as a whole, politics, the economy. But education is probably the most important one, because it’s lagging behind. It’s quite amazing: kids now are introduced to technology from, you know, toddler age. My son is two and a half, and he’s already quite savvy, just pushing the buttons, swiping his finger. My daughter is 11, and she’s far more sophisticated than – I’m not talking about myself – than me and my wife. And I believe my son will be even more advanced than my daughter. This nine-year difference will actually benefit him, because he’s being introduced to this world at the earliest age.

Now, one of the problems is that when we try to analyze the effect of modern technology on all the different layers of our society, we somehow separate education from that. Just imagine, as an experiment, a person taken by time machine from the end of the 19th century to these days. This person would be shocked by all these technological developments, and the only thing he or she would recognize is the classroom. It’s the same old-fashioned classroom, going all the way back to, I don’t know, the University of Bologna in the 12th, 13th century, with the teacher in the center, acting as if nothing happened, acting as if information still travels one way and the teacher in the classroom is the center of authority and knowledge. Absolutely not.

The kids, the 4th or 5th graders in the classroom, can accumulate more knowledge – they can learn more by swiping their fingers a few times on a computer screen – than this teacher could in an entire lifetime. I’m not trying to undermine the importance of the teacher in the classroom, but we live in a world where information travels both ways. It’s interactive. And we just don’t want to recognize that. The education system is way, way behind.

Because what is education? Why do we send kids to school? Because we want them to learn from our experience, and we want to prepare them for their future life. Now, are we ready to recognize the fact that most of the lucrative jobs for these kids don’t exist yet? Let’s talk about my 11-year-old daughter. In just 12, 13 years, she will be on the job market. I don’t know what jobs might be created by that time. Because if you look at the most lucrative and interesting jobs today – drone pilot, 3D-printer engineer, social-media manager – these jobs didn’t exist 50 years ago.

So, anything we do in the schools should somehow be connected to the future of these kids, to make sure they will not be totally lost in this environment – that everything they learned before the beginning of their adult life doesn’t turn out to be a waste, totally irrelevant and redundant.

And I think we have to start looking for this connection between education and the future impact that our children and grandchildren can have on society. Technology should play a fundamental role. It will have the same effect: it will make many jobs – we’re talking about traditional teaching jobs – redundant. But it will create new jobs. And trying to cling to this old, traditional way of teaching, to the rules imposed by teachers’ unions, is basically destroying the future of these kids. Because they will enter the world without the knowledge necessary for them to be successful.

KRISTOL: But you don’t really buy the argument that computers and iPhones destroy everyone’s ability to read anything longer than two pages, that they lose their attention span, that they don’t understand that history is important because everything is contemporary and current – all these kinds of popular arguments today?

KASPAROV: I don’t think we should confuse change with destruction. Change, yes. Negative change? To some extent, yes, there’s negative change, though it depends very much on education within families. Our daughter is a voracious reader. She still spends time with her computer, but she loves reading books – physical books. She loves the smell of freshly printed pages.

And while I can see the negatives, I can also see the benefits. Just look at these kids – going back to kids from an early age, from age two, three, four, to 10, 11, 12, 15: they can do more with this new technology in a few minutes, in 30 minutes, than we could have done 20, 25, or 30 years ago spending hours and hours, if not days and weeks, trying to program certain things.

It’s give and take. It’s not one-sided. It has negatives, but I always prefer to look at the positive side. Because we could actually have these computers helping kids learn more, faster – and helping them find the right spot for themselves in the future.

It’s like in chess. Today, you have young grandmasters, and many of them reach that level earlier than Bobby Fischer, for instance, who became a top-10 player by the age of probably 15, 16. But he had to learn everything by himself, just by reading books and playing games.

Today, they can just look at the computer: Fischer’s games, Karpov’s games, Kasparov’s games. And they have more knowledge about the game of chess by age 12 than Bobby Fischer had in his entire life. It doesn’t make them better players than Bobby Fischer, but it changes the way they approach the game of chess.

When I’m working with the rising stars, the talented kids, the young grandmasters, I’m still trying to move them, just slightly, in the right direction by explaining that it’s not just about machines, not about calculation – you have to understand things. But I also understand that it’s the way they look at these things; it’s a different lens. And that’s generational. You can try to adjust it, but calling it destruction, calling it the end of our game, and crying about it is just counter-productive. It doesn’t help.

KRISTOL: And it does seem to me the individual opportunities – in the sense that everyone goes at his own pace, and people can study the different things they’re interested in – are hugely good, can be hugely good, for people learning. I mean, if you get excited by something, you can now study it.

KASPAROV: It also means bigger competition, because now a kid from Africa has a chance to go through MIT courses.

So, by the way, it will create more competition. Yes, it will put more pressure on kids from developed countries – but that’s why we need to create new industries. That’s why I go back to space exploration. We have to take on more of the risky endeavors that we abandoned because they were too risky.

But again, I see a paradox in people who are so concerned about social equality and other important things. They are now opposing technology that brings hundreds of millions, if not billions, of people from the undeveloped world into the same equation – giving them opportunity. Because my experience, mainly from chess, tells me that talent is everywhere. It’s not about some countries being, you know, more fertile for chess and some less; it’s about opportunity. And I believe that’s true not just for chess; it’s true for everything.

So talent does exist. How do we find it? By providing more opportunities. Technology provides more opportunities. And that’s why I think it is something we have to embrace rather than resist.

KRISTOL: And I suppose the sad thing about American politics today – just to end on a note about politics – is that you can argue neither party is really interested in embracing opportunity, embracing risk. You have a left wing that is concerned much more about security and equality, and a populist right that is scared of change and of technology, in some ways. So who is really advocating for the kind of education you’re talking about, the kind of space exploration you’re talking about, and the kind of attitude toward risk and the future that you’re talking about?

KASPAROV: But it’s not just about education. I think it’s about ideas; it’s about spirit.

Going back to my childhood: we knew that there were two different worlds, divided by the Berlin Wall, by the Iron Curtain. And America represented something magnificent. It was an idea: of freedom, of the free market, of people being free to create, to make their own choices. And all of the engagement of the Cold War, it [was] still a clash of two approaches. And we knew what America stood for.

I think one of the problems today is that people in the Middle East, people in Asia, people in Russia – they don’t know what America stands for. Yes, if you see American soldiers landing in some part of the world, it’s a projection of force, but it’s not backed by ideas. And it’s quite tragic, because America is still the engine of the world economy and of innovation. And there’s a lot to be proud of.

And having Silicon Valley, having all of these resources, it’s not that difficult – it’s not rocket science – to come up with a concept that will put America back in the driver’s seat. But right now it seems to me that both political parties here are just totally lost in these petty domestic battles, and they don’t understand that America’s greatness is very much a reflection of how America is viewed by the rest of the world, and of why it was always the land of [the] free, where people wanted to travel, to stay, to work.

And it seems to me that all the calls, you know, for or against immigration are based on local calculations, not on concern about restoring American greatness.

KRISTOL: That is a good note to end on – restoring America’s greatness. And we will continue this conversation in the future.

Garry Kasparov, thank you so much for joining me today.

And thank you for joining us on CONVERSATIONS.

[END]