Peter Thiel on the Global Economy and the State of Our Technology

May 22, 2016 (Episode 57)

Table of Contents

I: China and Innovation 0:15 – 20:38
II: Is Innovation Slowing? 20:38 – 34:51
III: On the Need to Take Risks 34:51 – 57:38
IV: Artificial Intelligence 57:38 – 1:23:23

I: China and Innovation (0:15 – 20:38)

KRISTOL: Welcome back to CONVERSATIONS. I’m Bill Kristol. And I’m very pleased to have with me again Peter Thiel, innovator, entrepreneur, businessman, thinker on many topics – imaginative and unconventional thinker on politics and economics and society. Thank you, Peter, for taking the time. Good to have you.

We were just discussing before this that you’re back from China – this is now May of 2016. You had interesting things to say about that so I thought we’d just begin – it’s such a big issue in terms of the international economy, the future, the world, politically and economically. What struck you in China?

THIEL: Well, it’s a combination of incredible determination and incredible hard work of the people. I taught a course at Tsinghua University, one of the elite universities in China, and there was incredible interest.

KRISTOL: Your book was a huge seller in China.

THIEL: It did very well throughout the world – the Zero to One book – but probably sold more copies in China than in the rest of the world combined. There was incredibly intense interest in technology, in how to do new things. And at the same time, there’s also this very palpable sense that the model that’s worked so well for the last 30, 35, 40 years – globalization, growth through trade, extensive growth, copying things – is reaching some kind of a natural limit.

And so I think there is this very challenging transition China is going through in terms of finding a new growth model. It’s not clear everybody can become an entrepreneur or anything like that, even though you have Communist party slogans of mass entrepreneurship and things that sound slightly oxymoronic in that sort of vein.

KRISTOL: Well, what did you find among, especially, the students? Confidence in the future? Worries about the future? The desire to change the system, or not?

THIEL: I think it’s always – there are always things you can talk about and things you cannot talk about in that sort of place. But there’s an incredible drive to succeed. And there’s always a question of how one interprets the interest in start-ups and entrepreneurship. On the surface, it always seems very optimistic, very ambitious, and then just beneath the surface it often also points to a lot of anxieties – that tracked careers, working in a large state-owned enterprise or as a government functionary, aren’t quite what they used to be, or may not work as well in the next 20 or 30 years as they have in the last 20 or 30.

I think, you know, I think we often have this with these booms or bubbles where on the surface it’s extreme optimism. In the 90s, we had extreme optimism about the new economy in the US, and then just beneath the surface, it was this sense that the old economy was no longer working, and there was no future in the old economy. And so I sort of have been wondering if there’s something like that going on in China today as well. Again, it’s always hard to – it’s a vast country, 1.3 billion or more people. So it’s hard to really generalize about something like that from just talking to a few students in an elite university.

KRISTOL: Do the students think – and I guess what’s your own judgment of this, too? Are they fascinated by Zero to One because they think, “We could do that in China, too,” or is it “Gee, they can do it in the US, but we can’t do it in China”?

THIEL: I think it’s interesting precisely because that’s sort of the cusp of the debate. The two kinds of critiques my Zero to One book got would be, on the one hand, that of course China can innovate, and it’s wrong for me to suggest otherwise – which I didn’t really do, but there is sort of a sense that China gets linked with globalization. So one critique is: we can innovate just as much as the West. And the other critique is: well, zero-to-one businesses are good for America and other countries; we don’t need zero-to-one businesses, we can just copy things that work.

You get these two diametrically opposed critiques – we don’t need to do this, or we’re already doing it – and that leads me to think the truth is somewhere right in between. There is a sense that they haven’t needed to do it. They haven’t needed to innovate for a long time. This, of course, is the good part of globalization: you can copy things that work if you’re a very poor, very underdeveloped country. There’s a lot of room for copying. And at some point –

KRISTOL: Huge increase in living standards.

THIEL: And at some point you run out of things to copy. And there’s a sense that this is what happened. This was the arc of Japan – sort of the Asian exemplar in some ways – starting with the Meiji Restoration in the 1870s, and then all the way through the 1970s and 1980s it was incredibly dynamic. In many ways, it wasn’t a strictly capitalist model, but it somehow just worked. Then they sort of hit a wall in the 80s, where they more or less caught up, there was nothing left to copy, and it somehow failed to move beyond that. And then you get the question: when does China hit a wall like Japan? Very similar model – export-oriented, current-account surpluses. When do they hit a wall?

If you look at it on per capita GDP, you would say it’s roughly where Japan was in 1960, so you maybe have 20 years left to go on the copying. If you look at per capita GDP, you’d say China’s still at 1/7, 1/8 of the US. Japan got up to about 2/3 or 3/4 before they hit the wall. You have a long way to go.

Then, if you look at it from trade flows – where in many goods China’s manufacturing half the stuff that’s getting made in the world in that category – you sort of wonder how much growth is really left in doing this. You end up with this question of whether maybe this model worked really well for Japan when it was the first country doing it, and works much less well for China at a much bigger scale. Maybe they’re hitting the wall Japan hit in the late 80s today, in 2016.

KRISTOL: One of the most interesting things – I think this is in the book and in other things you’ve written, and I think we discussed it in our previous conversation. People tend to muddle globalization and innovation together, and I think you make the argument that they’re actually quite different.

THIEL: I always try to stress the difference where I always draw globalization on an X axis – it’s copying things that work, it’s going from end to end. It’s horizontal, extensive growth. And then I always draw technology or innovation on a Y axis – going from zero to one, intensive, vertical progress, doing new things. Doing new things versus copying things that work. And there certainly is a sense that in a successful 21st century, we want to have both. More globalization and more technology. But then there are probably some tradeoffs in terms of where the stress gets placed. If you look at the history of the last 200 years, we’ve had eras of technology and of globalization, of one or the other.

The 19th century was an era of both – tremendous globalization and tremendous technological progress – from 1815, ending with the start of World War I in 1914, when globalization goes very much into reverse while technology continues at a breakneck pace. I would date 1971, when Kissinger goes to China, as the year when globalization begins again in earnest. We have now had 40-plus years of breakneck globalization, but, as I’ve argued, relatively more limited progress in technology, mostly centered on this narrow cone of progress around computers, software, the Internet, and not so much in many other areas of technology. So the 20th century had a period of technology with less globalization, and then a more recent period of globalization with more limited technology.

It’s reflected in some ways in the different ways we talk about our worlds. So in 1965, when you had technology but no globalization, you would have described the world geo-politically as the First World and the Third World. The First World was the part that was technologically progressing. The Third World was that part that was sort of permanently screwed up.

Today, the dichotomy would be between the developed and developing worlds, which is a convergence, homogenization, globalization theory of history: the developing world is converging with the developed world, and they are becoming more and more alike. So this is a pro-globalization dichotomy, but at the same time, it’s also an anti-technological dichotomy, because when we say that we in the United States or Western Europe or Japan are living in the so-called developed world, we’re saying implicitly that we’re living in that part of the world where nothing new is going to happen – where things are done, finished, and exhausted.

I feel that’s always a little bit too pessimistic. I think, in theory, we should have both globalization and technology. In practice, we certainly have choices that we make. On an individual level, do we work in ways that are more globalization-oriented or more technology-oriented? It’s possible that over the last 40 years there were so many gains from globalization that it was natural for talented people in our societies to work in industries that were linked to globalization. And perhaps those gains are a little bit harder to come by now. And maybe it makes sense to rebalance towards technology.

If you think of it geographically within the US, you could say that New York City is the city linked to finance, and finance is the industry linked to globalization, simply because it’s about the global movement of capital – something that’s very easy to move around the world. So somehow the economy and the future were centered on New York City from, say, 1982 to 2007, and that quarter century was the period when people really believed in globalization.

The sort of strange shift from New York City to Silicon Valley I see as the shift to a sense that maybe we can get more out of technology, and that it’s harder to make progress on globalization. I still spend some time in New York City, though mostly in Silicon Valley, and the extreme contrast is always striking: optimism in Silicon Valley, and the very deep pessimism that I feel always permeates New York, where there’s this model that’s not quite working anymore. That’s what you see in New York City – and then I think, remarkably, that’s what people believe in China at this point, which again, as a country, is geared to this to a degree greater than perhaps anywhere else in the world.

KRISTOL: And how much of – go back to New York City, that’s so interesting. I guess there’s a real tradeoff in terms of the regime and the political arrangements between the two. An arrangement that might work for just massive increases in globalization and prosperity may not work to encourage innovation. If you’re hitting a wall, so to speak, and if the gains aren’t coming, it seems, anymore from globalization, you could have political crises and other things.

THIEL: Well, certainly, there are a number of things to say here. Certainly, I think growth can happen from some combination of globalization and technology, and we’ll have problems politically if there’s not enough growth. I think there is some level on which the combination of globalization and technology hasn’t been delivering as many goods as it should.

KRISTOL: In China, I’m thinking, to use – how much of a crisis do they face? Before we get back to us. Is it hard to tell?

THIEL: With respect to China, I think that globalization has been such an enormous driver that it seems to me hard simply to reorient it toward technology. There’s so much of it – you have the Foxconn factories, where people are manufacturing the iPhones, which again I think is more globalization than technology, with 1.3 million people employed.

Maybe that keeps going, maybe that gets redirected in some other way, but the transition, I think, is very far from trivial. It was a very big problem even in Japan, which, arguably, was globalization-oriented, tried to shift towards something more innovative, and never quite cracked it.

KRISTOL: So China has big challenges ahead of it?

THIEL: The bullish case is always that if you just look at per capita incomes, it would seem like there’s still tremendous room left for convergence, but my own sense for it is it’s possible we’re later in the globalization game than people think.

THIEL: Part of the 2012 People’s Congress was a statement – I believe it was Liu He, who is sort of the chief economic advisor to President Xi, who included it – that the globalization tailwind that had helped China for 30-plus years was abating, and that you’d have to really think about how to reorient much of the economy, maybe towards internal consumption, maybe towards innovation, but away from a lot of the things that are working. And then a lot of the power structure is linked to the things that are working: the subsidies, the state-owned enterprises that export goods and employ a lot of people. There’s a question of how easy it is to reorient that.

KRISTOL: And I suppose that is where we presumably have some advantage as a free country, as a free society, you can have a Silicon Valley, which is presumably harder in a place like China.

THIEL: I always worry that we’re not as free a country as would be desirable and that, certainly with respect to innovation, there are many areas where it’s effectively been outlawed – though not in the area of computers, which is still quite unregulated on the whole and where there has been a lot of progress. I think in the US there definitely are large elements of our economy that have been linked to this globalization story as well, and if I had to do sort of a long-short version, I would try to be skeptical of the parts that are linked to globalization.

If I were talking to a young person graduating from college, I would discourage them from working at a big money-center bank in New York, or for McKinsey, or an international global consulting firm – which is, again, a classic globalization career. The Clinton Global Initiative sounds very dated at this point – that’s so 2005. The World Economic Forum. All these things have sort of a dated feel. This is the way the future used to be.

KRISTOL: Was ’07, ’08 – will people look back and say, that was the breakpoint, where we did have a globalization of credit, which turned out to be not as soundly based as people hoped, right?

THIEL: That’s certainly the interpretation I would give of it: it was a peak in globalization, in the belief in the emerging markets. One of the acronyms of the 2000s was the BRIC countries – Brazil, Russia, India, China. And then it turned out that a lot of it involved investment of capital in these places that was misdirected, that there was a lot of corruption under the veneer of globalization, and that maybe it wasn’t going to work as simply as people had thought.

I would argue that since 2008 it’s not like the tide has really gone out; it’s still being held together. Global trade flows were growing at two to three times the rate of GDP growth for the 30 years before 2008; they’ve stayed roughly in line with GDP since then. So the tide hasn’t really gone out, but it’s stopped advancing. I think it’s been held together much more by political will and much less by widespread belief.

One miniature version of this global project has been the European Union, where, say, in 2007/2008, people really believed it. A German saver would naturally buy Greek government bonds: you got a higher interest rate in Greece, and of course it would eventually converge to a German rate, because everything was going to converge. Post-2008, there’s a lot more skepticism. There would be no natural way that money would flow from Northern to Southern Europe today. So keeping the European system balanced has required massive state interventions to flow the money in to buy the bonds in Southern Europe, because the people themselves no longer believe it.

And I think there are all these versions of it, where it’s being held together by will, and I wonder when the tide is really going to start going out in the years ahead.

KRISTOL: One does have the feeling, just looking at Europe and, in a different way, China, that the political elites – and one can understand why they’re doing this, and they’re not necessarily wrong to do it, in the case of Europe, I suppose – are holding the thing together. But now they’re holding together something from which the benefits have already been derived, and you’re beginning to see the downside, as opposed to “let’s all ride up” – the 90s sense that we’re all just riding into a bright, happy future together.

THIEL: I think the elites are right in that the alternative to globalization always seemed really bad. In Europe, it was two world wars, and so even if we have a sclerotic, corrupt democracy in Brussels, that seems better than what happened between 1914 and 1945. There is always this negative worry that the alternatives to globalization are all bad, but even though I think that’s correct, it shouldn’t blind us to the possibility that a lot of what goes on under globalization may also have been bad.

If it is just a way for dictators in emerging-market countries to abscond with money into Swiss bank accounts, or a way for terrorists to travel freely throughout the world and use our global communications and transportation networks – a globalization of violence through terror – there are all sorts of forms of it that can be bad. Even though globalization ending would be an unmitigated disaster, we should try to draw much tighter distinctions between the forms of globalization that have been good and the ones that have been bad, and not just have this sort of Panglossian, World Economic Forum, Clinton Global Initiative view that if it has the word “global” in it, it must simply be good.

KRISTOL: That’s interesting. And also I think one could take the position that it was mostly good for X number of years or decades, and that’s nothing to be ashamed of. It was reasonable – the standard of living for hundreds of millions increased in China and India, and parts of Central and Eastern Europe that had historically been hugely unstable became reasonably stable and better off. But the fact that something was good for four or five decades doesn’t mean you can simply keep applying that formula for another four or five decades. That seems to be very much the view of the European Union and, in a weird way, I suppose, of the Chinese Communist Party, too: we are just going to keep doing what we were doing. But the world doesn’t quite work that way, presumably.

THIEL: It may even be somewhat good. I tend to think if we had more trade agreements, this would generally be a good thing, but the incremental value of the next trade agreement may not be as high as the ones we already have, and so if we ask, where are we really going to get the drivers to take our civilization to the next level in the decades ahead, maybe it is not the same formula over and over again.

II: Is Innovation Slowing? (20:38 – 34:51)

KRISTOL: Let’s go to that, because you’ve been an articulate and very much contrarian proponent – although I think people are beginning to come around a bit – of the argument that there’s been an innovation slowdown. We’ve merged globalization and innovation in our own minds, assumed everything is better than it was, and let computers become a stand-in for all types of innovation, but in fact, in many important areas, innovation has been less than we would have expected it to be and less than it could be. Is that right?

THIEL: Certainly, if you went back to, say, 1968, the late 60s, and asked where people would have thought the world would be by the year 2000, or 2015, or 2016, it’s fallen way short. Look at the Back to the Future movies. Back to the Future I was 1985; they went back in time 30 years, and things had changed quite a bit from ’55 to ’85 – still a decent amount of change. Back to the Future II went from ’85 to 2015, 30 years into the future – which was a little under a year ago. That was a world that was supposed to be radically different, and I think the actual day-to-day changes, outside of computers, have been quite modest in those 30 years – since the 1970s, I would argue.

And so you can sort of rattle off these different areas – whether it’s biotechnology, where there has been some progress, but it seems to have decelerated. Space travel, transportation, more generally. All kinds of ideas people had in the 50s and 60s about reforesting the deserts or underwater cities, or all kinds of things like this that at this point –

KRISTOL: New forms of energy.

THIEL: Fusion, new forms of nuclear power – at this point they all have this sort of retro-future feel. The future the way it used to be. Star Trek feels very dated. These things feel very dated in their optimism about how much could be done. I do think there are always different things going on: there is acceleration in some places, there’s stagnation, there’s increasing inequality. You have different themes, but in a lot of these debates, I would say 70 percent is probably stagnation, and if we focus too much on inequality or acceleration, we’re going to get a lot of the public-policy debates wrong.

If you focus too much on acceleration – as Professor McAfee at MIT does in The Second Machine Age – it’s runaway technological progress that’s going to lead to more inequality; it’s both this really good thing and it has some problems, and that pushes you in a certain set of policy directions of what to do. Or the even more optimistic ones are things like Ray Kurzweil with The Singularity Is Near: this accelerating future where all you need to do is sit back in your chair, eat some popcorn, and watch the movie of the future unfold. And then the kinds of policy debates we end up with in this accelerating world tend to be, I find, incredibly ethically charged, where it always ends up being good-versus-evil technology.

The technology’s so overpowering that the main risk is that it’s going to destroy us all. So you have utopian forms and dystopian forms, and nothing in between. You have the worries in Silicon Valley about AI, or: do you really want to live forever, or is that just ethically bad? Whereas on the stagnation side, the antonym of good is not evil, the antonym is bad. The problem from my perspective is not so much that we have evil technologies that are going to destroy the world, but that we have a lot of bad technologies and bad science that simply don’t work.

Evil has more of this ethical charge, and bad has more the sense of not working. There is an ethical, moral version of it: people are lying about the science, they’re lying about the technology, and they’re saying it’s really incredible when it’s often not quite living up to the claims. I find myself much more of the view that stagnation is the general dynamic, and it’s reflected in the economic data, where median wages have been stagnant for 40-plus years and the younger generation has reduced expectations compared with their parents – very broad social and cultural indicators that suggest things feel kind of stuck. Then you have a very different set of questions: What do you do about it? What’s gone wrong? How do we get out of it?

KRISTOL: How much of it is in our control, or are we just hitting natural limits? That would be a question, I suppose, right?

THIEL: To the extent you see it as a technological or scientific problem, one approach has been that you’re just hitting natural limits. This is Tyler Cowen’s The Great Stagnation, or a recent book by another economist, Robert Gordon, The Rise and Fall of American Growth. The idea is that there were some fairly simple inventions in the 19th and early 20th centuries – that maybe the rate of innovation peaked as far back as the late 19th century, when you had incredible numbers of breakthroughs – and that it’s just the nature of the world, of the universe, that it’s much harder to find new things now. The low-hanging fruit has been picked, and therefore we have to resign ourselves to far reduced expectations and a more austere future.

KRISTOL: I just read a quote somewhere in an article – it wasn’t a whole argument, just a statement: you only invent antibiotics once. Massive gains in public health came from a huge breakthrough like that, where half your population isn’t basically dying from infections, from public-health disasters, and in infancy. You can’t replicate that.

THIEL: Those are the arguments on the nature side. I’m a little bit more partial to the culture side on this. That perhaps there never was any low-hanging fruit. The fruit was always at least intermediate height, and it looked pretty high up.

KRISTOL: They didn’t think at the time, “Hey, these are easy breakthroughs.”

THIEL: Certainly, we have a situation where one-third of the people at age 85 suffer from dementia or Alzheimer’s, so if you could do something about that, it would probably be close to as big a deal as antibiotics. But we’re living in a world where people don’t even expect that to happen anymore, I think.

KRISTOL: So what are the core problems there you think? What is it? Education? Regulation? Is it government? Is it just the culture as a whole?

THIEL: All these questions of why this is happening – why questions, I think, are always hard to answer; they’re always a little bit overdetermined. My sense is that, to some extent, it is all of the above, but I think there is a big hysteresis part to this, where success begets success and failure begets failure. If you haven’t had any major successes in a number of decades, it does induce a certain amount of learned helplessness, and then it shifts the way science or innovation gets done into a more bureaucratic, political structure, where the people who get the research grants are more the politicians than the scientists. You’re rewarded for very small incremental progress, not for trying to take risks. It’s led over time to a more incrementalist, egalitarian, risk-averse approach, which I think has not worked all that well.

KRISTOL: And so, how does one change that apart from telling people they should take risks?

THIEL: It’s not clear how you change it on a cultural or political level. I’m always focused on the very modest start-up version, which doesn’t actually work for everything – again, it’s a very imperfect solution. You can convince a small number of people to start a new business or to do something new. Convincing a much larger number of people to change things, I think, is a much more challenging problem. And then this gets to all these questions: Can you reform government research structures? Can you reform NASA? Can you reform the universities? Can you reform even large corporations, which in many ways are not much better than the universities or the governments? Maybe a little bit, but not as much as we often like to think.

KRISTOL: I suppose one way to think about it, and this is something you’ve thought a lot about – we do seem to have one area where we have had pretty massive breakthroughs, bigger than anticipated. The one case maybe where actually back to the future, 2015 to ’85, really is a genuine quantum leap, so to speak, and that is computers and related technologies, information, and so forth. Could one then just say, “Let’s go back and look at what happened there and what the structures were there and what incentives were there, and the culture was there, and why couldn’t we then try to make that more prevalent in other areas?” That would be sort of a simpleminded, but reasonable approach.

THIEL: That certainly would be my naive intuition. It was relatively unregulated, which may be harder to do in energy or medicine or some of these other areas. And there were parts of it that were not that capital-intensive, so you could get started with relatively small amounts of capital. There is a question of whether that’s true in other areas of science. Are there places where, if you need $50 million to test a new experimental design for a nuclear reactor, it necessarily becomes this deeply political process, where you’re at risk of the money going to precisely the wrong people every single time?

There was lower capital intensity, it was less regulated, and I also think there’s been this history of success, where over time you’ve attracted very talented people who believe they can do things. And one shouldn’t understate how effectual that sort of belief is. One of my friends, years ago, was looking to join a big biotech company, and the pitch to PhDs coming out of graduate school was: we have a better softball team than the other biotech companies. Whether or not you discover something is so random, so unpredictable, that the only thing we can control about your work environment is to tell you that you’re going to have a better softball team – that’s why you should join our firm.

There’s something around a lot of these areas where it felt that people were too much of a small cog in a giant machine, very hard to actually impact things, and there was sort of less of a sense of human agency. That’s where we always have to come back. How do we come back to a real sense of agency?

KRISTOL: It does seem like the safety and environmental regulations – and others, I’m sure, too – are just so massive in the medical areas, the energy areas, transportation. And then there are the political obstacles – the rent-seeking, the people who currently occupy things making it hard to make progress. Which I guess didn’t happen in computers – though, of course, after the fact, as you say, everything is overdetermined. One could have thought in 1985 – it’s not like there was no one in the computer or information-technology field, and they were presumably interested in blocking new entrants, but somehow it didn’t work that way.

THIEL: It’s a complicated history. We had the thalidomide disaster with the FDA, so there are specific things you can point to that went very wrong. But I think today you would not get the polio vaccine approved. When it was first used, they dosed it wrong, and I think 10 or 15 people got polio accidentally. Today that would probably slow it down for another 20 years or something like that. I do wonder whether we’ve become too risk-averse in various ways.

Even this concept of risk is a very strange concept. One of the things you can do on Google is search for words and the frequency with which they occur in books over time. If you look at the word risk over the last 200 years, from 1800 to 2000, it’s a very infrequent word – very rare until about 1970. And then it goes up on an incredibly steep curve, and it shows up in much more common titles – How to Manage Risk, How to Take Risks. So the thought I’ve been wondering about is whether a lot of talk about risk is actually counterproductive to risk-taking. If there’s a risk that your kids will be kidnapped if they’re left unsupervised in the playground, or a risk that this can go wrong, or that can go wrong, or a risk that somebody is going to die from some new medical treatment, then risk is actually a word that’s used to discourage people from doing anything. Its more and more frequent occurrence is a symptom of a society where less and less good risk-taking is actually taking place.

III: On the Need to Take Risks (34:51 – 57:38)

KRISTOL: A good friend of mine who is on the board of a financial institution says they spend more time – and I’m not sure this is bad, I’m just reporting it – they spend more time at their board meetings discussing the management of risk than actually making money. It’s minimizing the downside, and it’s not a stupid thing to be worried about, obviously, especially after 2007 and ’08, but at some point, you’re spending more time on the insurance – ensuring that there is no disaster from the products you’re making – than on the products themselves. I think diversification – you’ve written about this – the degree to which venture capital and people generally think in terms of diversifying instead of –

THIEL: There are all these ways where I wonder whether the focus on the processes of risk-minimization distracts you from the substance and ideas, from figuring things out, from doing new things. And so I think something like that seems to have been very much at work. I think that – let me see how to put this – one of the things that’s true about risk is that it is this very probabilistic way of thinking about the future, where the future is dominated by chance, by fortune, and that’s this all-powerful force that dominates everything. And one of the questions that I think is very unclear is: is this in fact a deep truth about the universe, or is it more about the abdication of our responsibilities?

As a venture capitalist, I’m always tempted – the temptation is always to look at a company and say, “I don’t really know if it’s going to work or not, so I’m just going to invest a small amount and see what happens.” The temptation is to treat all these companies as lottery tickets, but once you treat them as lottery tickets, I’ve found you somehow psych yourself into losing. You’ve already psyched yourself into writing too many checks a little bit too quickly, and you’re not actually making a statement about the inherent chanciness of the universe – you’re making much more a statement about your own laziness and your own unwillingness to think things through. I do wonder if there is something like that that gets obfuscated by this talk about risk, where it always sounds like a statement about the larger world but may really be more a statement about the failure of our ability to think things through. Maybe it is hard to precisely model these things out or something like that.

In the case of start-ups and very innovative businesses, it’s always quite unclear to me how you even talk about risk or probability. You can talk about it if you can run an experiment many times and see what happens over and over again. So if you do high-frequency trading on Wall Street, that has a probabilistic character where you can probably model the risks very, very precisely. But if you’re investing in a one-of-a-kind company, I don’t even know how you’d go about measuring what the risks are or how you would quantify them – if you have a sample size of one, the standard deviation is undefined, so in theory you can’t say anything about the risk. If Mark Zuckerberg started Facebook over again 1,000 times, how often would it work? We don’t get to run that experiment. And so risk-oriented processes make sense in a context of large N. Insurance is the context of large N: millions of people driving cars, so many have accidents, and you can model it out. Life insurance policies, actuarial science – these are the large-N sciences.

But I think there are a lot of very important things that are small N, or even N equals one – a very small number. And if we’re too beholden to these risk models, I think we just don’t do those kinds of things, and they may be very important.
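The large-N/small-N contrast has a direct statistical face. A minimal sketch, with an invented accident probability: with a million drivers the loss rate is tightly predictable, while a single observation does not even permit a standard deviation to be computed:

```python
import random
import statistics

random.seed(0)

def accident_losses(n, p=0.05):
    """Simulate n drivers: 1 if the driver has an accident (probability p), else 0."""
    return [1 if random.random() < p else 0 for _ in range(n)]

# Large N: the observed accident rate converges to p, so the risk is modelable.
pool = accident_losses(1_000_000)
rate = sum(pool) / len(pool)   # very close to 0.05

# N = 1: a sample standard deviation cannot even be computed.
try:
    statistics.stdev(accident_losses(1))
    estimable = True
except statistics.StatisticsError:
    estimable = False   # one observation tells you nothing about the spread
```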

KRISTOL: The large N things – the insurance schemes are important for maintaining the stability of society and taking care of people who are afflicted by bad luck and bad circumstances, but you can have a perfectly insured, perfectly running insurance scheme and never have any progress. I guess that’s the simpleminded way to put it.

THIEL: The way I would put it is if you had a perfectly running insurance scheme, it would probably collapse under its own weight eventually. And so if you think of our education system as largely an insurance policy – the universities are largely this insurance policy for upper middle class parents who are scared that their kids are going to fall through the ever bigger cracks in our society – it can sort of work. But are we really going to have a future in which everybody’s insured and isn’t that a recipe for disaster over time where nothing –

KRISTOL: Sort of the welfare state is a reasonable supplement to a growth economy.

THIEL: But it can’t work as the be-all and end-all for everybody. And I think in the same way, a super-low-risk approach can work in parts, but it probably can’t work universally. And I don’t think it can work remotely as extensively as it has been done. You know, in ways, I was guilty of this. I went to law school – the seemingly low-risk thing to do from undergraduate in the late 1980s, early ’90s. It was this fairly low-risk way to get an upper-middle-class career in a big law firm.

In retrospect, it’s turned out to be quite high risk for the people who did it because there were far too many people who did it. It worked well for a few years, then a lot of things just tend to go wrong, even for the people who come out of these very successful, tracked kinds of places. If you wanted to date this: maybe in 1965 or 1970, if you were graduating from college, there was actually a way you could have a low-risk, very successful career – so few people were doing it that you could arbitrage this risk successfully. I don’t think that’s been working terribly well. I think that’s been working less and less well for the last 30 to 40 years, even as it’s been understood better and better.

KRISTOL: I mean, people forget – and you know much more about this than I do – the ’07/’08 crash, the instruments that are generally thought to have contributed to the crash, at least, and to have gotten it out of control – a lot of people didn’t know what they were holding, and one thing led to another with kind of knock-on effects. Those instruments were precisely supposed to be lessening risk. It wasn’t quite right that many people on Wall Street thought, “Hey, let’s gamble.” Some did – but really, it was the opposite. It was: if we can spread the risk – holding one mortgage is a big gamble, holding 1/100th of 100 mortgages is safer because three of them will go under, but, of course, you have the other 97. But then it turned out that – guess what? – the whole thing is rickety, more rickety in a way than the very old-fashioned community bank holding the local mortgages, right?

THIEL: I would say it turned out to be more rickety because people thought it was too safe. In theory, diversified portfolios would be somewhat safer. But if everyone thinks they’re incredibly safe, then you might leverage those diversified portfolios up a great deal more than you usually would and you would take risks that a community bank would never have taken.
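Kristol's mortgage arithmetic and Thiel's leverage point can be put in a small simulation. The default probability, pool size, and leverage ratio below are invented for illustration; the point is only the shape of the result:

```python
import random
import statistics

random.seed(42)

P_DEFAULT = 0.03   # assumed per-mortgage default probability (illustrative)
TRIALS = 10_000

def pool_loss(n_mortgages):
    """Fraction of a pool of n mortgages that defaults in one simulated year."""
    defaults = sum(random.random() < P_DEFAULT for _ in range(n_mortgages))
    return defaults / n_mortgages

single = [pool_loss(1) for _ in range(TRIALS)]     # hold one mortgage
pooled = [pool_loss(100) for _ in range(TRIALS)]   # hold 1/100th of 100

# Diversification works as advertised: same expected loss, far lower spread.
sd_single = statistics.stdev(single)
sd_pooled = statistics.stdev(pooled)

# But treat the pool as "incredibly safe" and leverage it 30x, and the
# tail risk returns: any year with more than ~3% defaults wipes you out.
LEVERAGE = 30
wiped_out = sum(loss * LEVERAGE >= 1.0 for loss in pooled) / TRIALS
```

In this toy setup the diversified pool's loss spread is an order of magnitude smaller than a single mortgage's, yet the leveraged pool is wiped out in a large share of simulated years – risk moved, it did not disappear.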

KRISTOL: That does seem to be a metaphor more broadly for our politics and society, I would say. The ’07 crash. You know what I mean?

THIEL: It’s sort of like the probabilistic thing – this probabilistic kind of thinking has affected so many things. The very strange version in politics is the way that poll-taking has become so unbelievably dominant. It’s like you can poll these statistical aggregates of people probabilistically until you get to 51 percent – you only need 53 percent, which is where Romney’s 47 percent comment came from. I need 53, the other side needs 53, I don’t need any more. Then there’s this probabilistic science of getting at that. It has had a certain amount of power – we shouldn’t deny that it has had a great deal of power – and then at the end of the day, it has some very severe limits where, if all the politicians are looking at the same polls, we’re really just electing Nate Silver president or something like that, which is sort of a strange change in Constitutional priorities.

KRISTOL: And everyone’s micro-targeting, and then as happened this year in 2016 on the Republican side at least, a guy blew through who did no micro-targeting. Everyone else was picking their lane to run in.

I’m always struck by this – someone mentioned the word “lanes” to me recently, and I hadn’t heard the term in about a month, but early in the primary process, in late 2015, early 2016, that was all the talk among political pundits, and it wasn’t stupid. There’s the conservative lane, the evangelical lane, the moderate lane, this lane, the governors’ lane.

It turned out the one guy who seemed to ignore all that and just decided, “I’ve got a message, and I’m not right now leading, but I can be leading, and I can go up from 25 percent to 35 percent to 45 percent to 55 percent,” sort of did that. It’s sort of similar – whatever its virtues and limitations – to someone who didn’t accept the kind of “I’m going to diversify my political effort, or narrowly target my political effort.”

THIEL: I think we’re still too close to 2016 to coherently analyze it. I’m sympathetic to that idea. On the other hand, for all the ways that Trump gets described as this sort of incredibly independent kind of non-politician, it’s still very striking how all the speeches just get started with recitations of polls. And so it’s again –

KRISTOL: That’s more of a snowball thing, don’t you think? The way you do with businesses – getting people on board, legitimizing himself by, “Hey, I’m winning.”

THIEL: Maybe it’s another commentary on how beholden we are to polls. If you’re the politician or aspiring president who talks more about polls than anyone else, that’s strangely advantageous – which seems a very odd but somewhat true statement.

KRISTOL: In a democracy, people want to be on the winning side so I think he had a real instinct on that, too – there’s this sort of way in which you keep doubling down, so to speak, and people want to be with the winner. Who knows what will happen?

THIEL: These things can work in certain contexts for a while. You’ll be winning as long as you’re winning. But there are, I suspect, all these other contexts where this sort of probabilistic approach has not been terribly helpful. In politics, at least, you can still just do surveys; it’s somehow related.

If you’re talking about starting a business and saying, “We’re going to just do this random walk, we’re going to do A/B testing of different types of products and see what people want, we’re going to have no opinions of our own and do all this testing on customers” – the problem is the search space is just way too big. There are way too many things you can do; you don’t have enough time to go through this sort of statistical surveying and feedback mechanism. A lot of the things that have worked best have been, somewhat paradoxically, not so probabilistically beholden. Look at Steve Jobs and Apple – you have something like the Isaacson book on Steve Jobs, which sort of portrays him as this tyrannical boss who just yells at people all the time.

Even if all of that is true, it doesn’t seem to me to get at why it worked at all. Why did it inspire people? I think it was because there was a plan, you were going to execute against it, and you could pull off some really incredible things that you could never do if you just did everything through this sort of instantaneous feedback. I think complex planning, complex coordination – these are the kinds of things that have gotten very hard to do in this sort of probabilistic society. If I describe a complicated plan to you in a technological context, you’d think it was like a Rube Goldberg contraption: it’s just not going to work, something is going to break – because we think of every step as probabilistic, instead of thinking of the steps as deterministic, where you just have to get the different steps to work.

You can actually have a Manhattan Project – there’s no reason you can’t do this, there’s no reason even the government can’t do this; it actually did in the 1940s. You can send a man to the moon with Apollo. You certainly should be able to do a website for the Affordable Care Act, since that’s demonstrably a lesser, demonstrably inferior technology to Apollo or the Manhattan Project.
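Thiel's earlier objection that the A/B-testing search space is "way too big" is easy to make concrete. The decision counts here are invented for illustration:

```python
# Suppose a product involves 30 independent binary design decisions
# (number invented for illustration).
DECISIONS = 30
configurations = 2 ** DECISIONS        # about 1.07 billion distinct variants

# A very aggressive testing program: one A/B test per day for ten years.
tests_run = 365 * 10
fraction_explored = tests_run / configurations
# Feedback-driven search can only ever touch a vanishing sliver of the
# design space; the rest has to be covered by a plan, not by surveys.
```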

KRISTOL: I suppose on the business side, if you had surveyed, polled ahead of time – this has been famously said about Starbucks. Do you want to pay two or three times as much for a cup of coffee that may or may not even taste better than the coffee you get at your local diner – kind of a bitter European taste or whatever? People would have said no, presumably. Or in any case, there wasn’t any huge incipient demand, or whatever term an economist would use – unmet demand for that. I’m not sure, even if one had polled ten years ago – would you like to have a little phone in your pocket that could do this and this? – people would have gone, maybe, maybe not. There is a way in which all of that thinking about economics does seem self-limiting and misleading in some ways.

THIEL: It’s always this question of whether it’s that good to always be looking at the people around you and getting feedback from them in different ways. There’s this very strange aspect of Silicon Valley where so many of the very successful entrepreneurs and innovators seem to be suffering from a mild form of Asperger’s or something like this. I always wonder whether this needs to be turned around into a critique of our society: if you don’t suffer from Asperger’s, you get too distracted by the people around you. They tell you things, you listen to them, and somehow the wisdom of crowds is generally wrong.

The James Surowiecki wisdom-of-crowds book has a very specific thesis, which I believe is true. Then there is the way the term always gets misused. The specific thesis is that the wisdom of crowds works if everybody in the crowd is thinking independently for themselves. So if we have a jar of marbles, and everybody guesses how many there are, then somehow the collective judgment ends up being better than most individuals’ judgments. But the more common way the wisdom of crowds “works” is through this sort of hyper mob-like behavior, where I think you get a lot of irrationalities. The wisdom of crowds becomes the madness of crowds. It becomes a bubble in finance, or something worse in politics when it goes very wrong. They’ve done these studies at Harvard Business School, which I often think of as consisting of the opposite of Asperger-like people – people who are extremely socialized, extremely extroverted, and have relatively few convictions of their own.

KRISTOL: But good work habits.

THIEL: Good work habits. You put them in a two-year hothouse environment in which they spend two years talking to one another trying to figure out what to do, which leads to this very dysfunctional wisdom-of-crowds dynamic – because none of the other people have thought about it for themselves or have any independent ideas. And they’ve done studies on this: systematically, the largest cohort at Harvard Business School always goes into the wrong thing.

In 1990 they all wanted to work for Michael Milken, one or two years before he went to jail. They were never that interested in tech, except in 1999, 2000, when they descended on Silicon Valley en masse and timed the end of the dot-com bubble perfectly. And on and on. There is something about this that’s very tricky, where probably a lot of innovation, creative thinking, doing things that matter generally depends on not being so beholden to the people you’re immediately around. Even though you get feedback, and the feedback is helpful, there are these cases where it can go very wrong.
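The two regimes Thiel distinguishes – independent guessers versus guessers anchoring on one another – can be simulated. All the numbers below (jar size, guess noise, anchoring weight) are invented for illustration:

```python
import random
import statistics

random.seed(1)
TRUE_COUNT = 1000   # marbles actually in the jar

def independent_guesses(n, noise=300):
    """Each guesser errs independently around the true count."""
    return [TRUE_COUNT + random.gauss(0, noise) for _ in range(n)]

def herding_guesses(n, noise=300, anchor_weight=0.95):
    """Each guesser mostly copies the running average of earlier guesses."""
    guesses = [TRUE_COUNT + random.gauss(0, noise)]  # first guess is independent
    for _ in range(n - 1):
        anchor = sum(guesses) / len(guesses)
        own_signal = TRUE_COUNT + random.gauss(0, noise)
        guesses.append(anchor_weight * anchor + (1 - anchor_weight) * own_signal)
    return guesses

independent = independent_guesses(1000)
herd = herding_guesses(1000)

# Independent crowd: individual errors cancel; the average lands near 1000.
crowd_error = abs(sum(independent) / len(independent) - TRUE_COUNT)

# Herding crowd: the guesses cluster tightly, but around the early guesses
# rather than around the truth, so agreement no longer implies accuracy.
herd_spread = statistics.stdev(herd)
independent_spread = statistics.stdev(independent)
```

The herd's tight clustering is exactly the misleading signal: its low spread looks like confidence while carrying none of the error-canceling that makes the independent crowd accurate.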

KRISTOL: And I suppose it’s most helpful, I would think, in a static situation. If you’re dealing with a static universe – X number of people, X number of things you can do – then the way of getting the most feedback suits you best to deal with that situation, I suppose. But it doesn’t tell you what happens when there’s a breakthrough in innovation or some disruptive event.

THIEL: This depends to some extent on what your priors are. My prior is that there is a lot more innovation that could happen, and the feedback mostly comes in the form of anti-theories – “you can’t do that, that’s too bold, that doesn’t quite make sense.” In a world where a lot of innovation is still possible, this sort of horizontal feedback probably has a very bad dampening effect. If we were in a world where, in fact, everything big had been discovered, this sort of feedback would stop people from wasting their lives on some quixotic quest of one sort or another. So it does depend some on what your priors are.

If I had to make a judgment on it, I think we are in a world where these feedback mechanisms have gotten far too powerful, where people are too easily swept up by these mob dynamics. I’m certainly not going to go on your TV show and blame this on the Internet in any way, but you have to ask whether there are ways some of these technologies have maybe even exacerbated tendencies that were already there in our society. The phenomenon of political correctness – there are many ways to describe it, but certainly one is that you have these incredibly powerful negative feedback effects that get brought to bear and have this very inhibiting character. And I don’t think it actually results in things being more generative than they otherwise would be: it cuts off a lot of lines of inquiry, but I don’t think that means it opens up that many more.

KRISTOL: Final question on this topic, which is very interesting, I think. It seems to me that conservatives and libertarians – we’re both pretty sympathetic to those points of view. Hayek, one of the great libertarian heroes, and justly so, and a great economist – The Fatal Conceit, you can’t have central planning – which I think has been a very useful critique of big-government, nanny-state, welfare-state liberalism. Don’t you think it’s had the effect – just listening to you and thinking about this – that conservatives have bought into a very probabilistic – or maybe that isn’t quite the right word – but such a hostility to planning, such a hostility to human conceit, that it almost spills over into a kind of fatalism about human innovation or agency?

THIEL: I think there is a very strange way that some of these ideas developed. Certainly in the area of economics, in the 19th century, in pre-Austrian classical economics, the thought was you could still measure things objectively. How many tons of steel is this factory producing? How many cars per hour are the workers producing on this assembly line? There was that sort of intuition about objective value. And there’s been this shift towards making things more subjective, where it all becomes unmeasurable or there are too many variables to measure. I think that in some ways was started by Austrian economics, and by now, at this point, it really permeates all of it.

It’s not just the Right – the Obama Administration wouldn’t say that they can have strong substantive ideas of what to do. They can improve processes, but they would never actually think that you could actually build a very specific thing in a preplanned way. They don’t even believe in that anymore.

KRISTOL: They believe in nudging. Isn’t that the term? Such a limited aspiration for –

THIEL: It’s, again, feedback from things that are already working, and we improve them a little bit – we’re all just going to go up the gradient, and we may get stuck on a low-lying hill. Sometimes you have to step back and wonder: where in the world do we find ourselves? If we just go uphill, do we end up on a low hill, or do we really end up on Mount Everest? We have all these hill-climbing theories; we have no valley-crossing, mountain-climbing theories, and we need more of those kinds of things.
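The hill metaphor corresponds to a standard failure mode of greedy optimization: a climber that only ever moves uphill tops out on the nearest hill, however low. A minimal sketch, with an invented two-hill landscape:

```python
def landscape(x):
    """Two hills: a low one peaking at x=2 (height 3), a high one at x=8 (height 10)."""
    low_hill = 3.0 - (x - 2.0) ** 2
    high_hill = 10.0 - (x - 8.0) ** 2
    return max(low_hill, high_hill)

def hill_climb(x, step=0.1):
    """Greedy: move only while a neighboring point is strictly higher."""
    while True:
        uphill = max(x - step, x + step, key=landscape)
        if landscape(uphill) <= landscape(x):
            return x        # no uphill neighbor: we are stuck on some summit
        x = uphill

peak = hill_climb(0.0)   # start in the basin of the low hill
# Greedy climbing tops out on the low hill near x=2 and never crosses the
# valley to the much higher hill near x=8 - it has no "valley-crossing" move.
```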

I think the shift towards subjective ways of measuring economics ultimately leads to this situation in which we can’t even coherently talk about progress. Are we actually progressing as a society? And then the answer becomes: it’s just impossible to know. It’s just different, all these things are somewhat different – maybe you’re living in a much smaller apartment in New York City than your parents or grandparents lived in, but you have an iPhone with a really smooth flat surface, so you get some kind of hedonic benefit from that. How do you trade that off against an apartment a quarter the size of your grandparents’?

And so I think it’s hard to know how to think about these things, but I do worry that the stress on these subjective, hedonic economic measures is an excuse we’re telling ourselves to hide the stagnation or decline from ourselves.

IV: Artificial Intelligence (57:38 – 1:23:23)

KRISTOL: Speaking of progress, one of the ways people hope for progress, obviously, is computers or more broadly – well, not broadly – but artificial intelligence, I guess, has been a focus lately. You’ve given this a lot of thought and seem to be closely involved in a lot of these efforts or watched them closely, and invested in some, I’m sure. What do you think? Are we at some tipping point? Is the world in 20 years going to look that different in terms of what machines can and can’t do?

THIEL: It’s very hard to say. You know, I have many somewhat conflicting thoughts on this, and I don’t necessarily want to come down very strongly on one side or the other of these debates. I would say that certainly computers generally are an area where there’s been a lot of progress, so it’s maybe not unreasonable to ask the question: how much more progress could there be? How many more ways could AI happen?

On the other hand, one of the things I don’t particularly like about artificial intelligence is that it’s become such a buzzword, and I think these buzzwords often obscure more than they illuminate. One of the ways to see that it is a buzzword is to see how ambiguous it is. Artificial intelligence can mean anything from the next generation of computers to the last computers that humans will ever build, and everything in between. So it has this rather elastic meaning. When artificial intelligence means the next set of computers, it sort of pushes the conversation towards automation, replacing certain low-skill or medium-skill kinds of activities people are doing. When you talk about it as the last computing device, where you’re building a mind that can outthink and outwit any human being, you end up with these very scary, somewhat political questions. Is it going to be friendly? Is it dangerous? If something like that can be developed, then maybe it will be on par with extraterrestrials landing on this planet, where I think the first question would not be about what this would do to the unemployment rate. The first question would be: are they friendly or not? The first questions would be political.

So I think that’s sort of a general framework. I would say that the bullish AI consensus that exists is that we’re making progress very quickly, that there are no deep reasons why computers couldn’t do everything better than humans do, and that this may indeed happen in the next few decades. This would, of course, be an extraordinarily important and transformative set of changes. I’m certainly open to all these perspectives, but I also wonder whether there are parts of this that one could question.

If you had to be a little bit more critical of it, there are two points of criticism. The first starts with the history, where people have in some ways been too optimistic about AI for quite some time. In 1970, there were people who said you’d get computers to understand language and do everything humans could do within a decade. The same thing would have been said in 1980. We’ve been here before. So there’s been a history where this has turned out to be more difficult than people thought.

Then, of course, there’s always this sense of whether it’s just a particular moment in time, where at the peak of a technology cycle the only thing we can worry about is technology that’s so good that it’s too fast and changes things too radically. In spring of 2000, there was this essay that got a huge readership in Silicon Valley by Bill Joy, one of the founders of Sun Microsystems: “Why the Future Doesn’t Need Us” – how runaway technology would get rid of people. And, so –

KRISTOL: I remember that well. That’s already 15 years ago.

THIEL: As a socio-cultural observation, a psycho-social observation: in spring of 2000, what we should have worried about was not whether the technology was going to work so well that there would be runaway progress; the real question was whether it was working at all. Were the business models working? It turned out a lot of things didn’t work that well, and we had a period when people went back to banking and back to consulting from Silicon Valley – B to C and B to B didn’t mean business-to-consumer and business-to-business, they meant back to consulting and back to banking. So anyway, I do wonder whether the sort of mini AI bubble that we’ve seen in the last few years is maybe symptomatic – that we’re at some local peak in optimism about how much Silicon Valley is doing and can do, all these sorts of things.

One aspect of it that I think is somewhat disturbing is that this sort of probabilistic thinking creeps into the AI issue as well. If you meet someone in Silicon Valley, they tend to believe that AI is possible, that it’s happening soon, and that it’s potentially dangerous – these are three very widely held beliefs. The fourth thing – and you don’t want to question those three beliefs, but this fourth one is a very powerful one to push back on – is: well, you have no idea how to build one that’s safe, and you couldn’t build one that’s safe. The way you’ve defined the problem – that you’re going to build a superior mind that will be able to outthink you – you won’t be able to build it in a way that’s safe.

One of my colleagues was talking to one of the top AI researchers and sort of pushed him on this, and it was basically, “Professor, you obviously don’t believe any of your theories about AI because if you did you wouldn’t be publishing any of it on the Internet, because when the AI emerges, it will read about it on the Internet, and it will hide to not make you aware of how powerful it has become.” So there is something akin to that that’s implicit in all of this where if the sort of the optimistic case about technological possibility is true, there’s this sense of helplessness in terms of what people can actually do about it. It’s very much linked to this rather dystopian view, and this gets reflected in the Hollywood movies. I cannot think of a single movie from Hollywood about AI that doesn’t have a dark and rather disturbing undercurrent to it.

Again, I think it’s this probabilistic reasoning – technologies are out of our control, there’s no human agency, we can’t actually know what we’re doing – that gives it this very strange quality. Of course, there is a sense in which the term AI simply means that human intelligence isn’t up to the task, so one other way of interpreting the AI boom is that on the surface it is extreme optimism about the potentialities of computer technology, while beneath the surface it is, perhaps, simultaneously a great deal of pessimism about the other technologies that humans might develop, and deep pessimism about the possibility of what humans can do.

Sort of a man with a hammer sees a nail everywhere. One interpretation I have of the AI bubble is that again, in a strange way, it’s symptomatic of the technology stagnation thesis.

KRISTOL: That seems to fit with the notion that we’re developed societies and there’s not much more that we can do on our own. Deus ex machina kind of literally is going to come in and do what I don’t know, but you know. Discover a cure to cancer.

THIEL: Waiting on this AI to save us from all these things, and we don’t know whether it will be friendly or not. It has this very strangely passive aspect, where there’s not enough room for human agency to my liking.

Certainly, one somewhat more basic point I always try to make is that it’s not at all obvious why the question about, let’s say, near-term AI – so not the very futuristic stuff but the next generation or the generation after that – why it should be seen as such an adversarial dynamic. We’re always talking about computers as substitutes for humans, and yet the reality is they are very different. Computers are able to do things in this incredible brute-force way; humans are sometimes able to do things far more effectively. And there are ways in which our minds are probably much simpler than we think. It’s probably wrong to think of a human mind as having hundreds of billions of neurons as if that somehow codes for hundreds of billions of things in our mind. You’re a really smart person if you have a 20,000-word vocabulary.

And so I think there is something about computers and humans where they’re deeply different, and I wonder whether the focus on AI has somehow obscured these differences. The mystery in some ways is: why have we actually not built AI? The conventional explanation in Silicon Valley is that we haven’t built it because human minds are so complicated – you have hundreds of billions of neurons, you’d need a computer with hardware of a kind we haven’t quite developed yet, and it’s just a matter of time, and we’ll get there. But you could make the same argument about this cup here: it has close to Avogadro’s number of molecules or atoms in it and could never be modeled atom by atom by a computer, but you don’t actually need to model every atom – you could just model the basic structure.

In a similar way, if such a reductionist theory of the mind were possible, it could perhaps already have been implemented on 1970s- or 1980s-type hardware. I think there are some very mysterious questions like this that have not been fully thought through. My guess is that there are these really big differences – and there’s a separate question of whether you could brute-force a simulation of a human mind; there are probably ways you could do it, and there are some limits to that – but computers and humans are naturally complementary because they’re so different.

We normally need to be afraid of people who are just like us, because those are the people we’re competing against. Globalization is scary because it means that you have very underpaid people, who aren’t that different from us, in other countries competing with people in the US and Western Europe. The computers are not – they’re complementary, they’re not really competing. They would be scary if we had a super-futuristic version where we had a robot that looked just like you, Bill Kristol, and we didn’t have to pay it any money in any context. You’d be rightly alarmed by that.

KRISTOL: So would many other people.

THIEL: This is the common-sense intuition why people are scared of cloning, there’s a sort of bioethics cut on cloning, but the common-sense reason is that if you had 100 clones of yourself, they’d be competing with you, and you’re always competing with people who are like you, and so to the extent that computers are really different, that’s, I think, much more of a positive than a negative.

KRISTOL: When you say the human mind is simple, you’re saying not that it’s easy to replicate but the opposite, that there is something mysteriously simple about it so that brute force doesn’t replicate at least some of the things that it can do?

THIEL: Yes, my view is that it’s mysteriously simple, and it seems to be able to do very powerful things with relatively few components. Maybe there is some relatively simple algorithm that could replicate it, but it’s strange that we haven’t found it. Perhaps it’s not a problem of hardware, contrary to the naive standard view that we just need more hardware and then we’ll get it to work.

KRISTOL: When the computer beat Garry Kasparov in chess in ’97 or something like that, Charles Krauthammer, who knows a lot about chess and a fair amount about science too, wrote a cover story for us, which we titled, “Be Scared, Be Very Scared” – an interesting reflection. I haven’t gone back and looked at it for a while, but I would say 20 years later, it doesn’t feel that scary. Clearly, in a game like chess, with a set number of pieces and a fixed board and rules, brute force, so to speak, can at some point surpass even the greatest player.

Most of life isn’t like that, I think, and one doesn’t have the sense that computers are defeating humans in the way one might have thought 20 years ago they would by now have moved from chess to other parts of life. Maybe I’m wrong about that –

THIEL: I think it’s an open question. It’s certainly happened less quickly than people would have predicted at various points. Certainly, you have day-trading, you have all these great trading algorithms where the computers are doing two-thirds of the trading on the stock market. You have certain places where these things happen. On the other hand, things like language translation still feel extremely far away from that. If anything, the way Google has improved its language translator is to find phrases in books and then see how humans translated those books. It has sort of found efficient ways to leverage off of human translators, and it’s not actually any kind of systematic understanding on the part of the computer of what the words actually mean. That kind of thing still seems very far away.

KRISTOL: And day-trading in a way – I don’t know that much about it – but it seems like a good example because it’s not stock-picking, let alone venture-capital investing, which to my knowledge, computers haven’t –

THIEL: I’m hoping that venture capital is extremely far from being replaced by computers.

KRISTOL: The use of computers in those areas is to help people process data very quickly and do a lot of comparisons and stuff, but at the end of the day –

THIEL: The more hopeful view that I still have is that it is likely to be just a continuation of what’s been happening since the Industrial Revolution, where mechanization and automation free people from certain kinds of repetitive tasks and free them up to do other things. It can be scary if we’re living in a society where there are not other opportunities, where there’s not enough growth, where there are problems like that. But in and of itself, freeing people from the drudgery of repetitive tasks is probably a good thing.

KRISTOL: The transition can be very unpleasant, and of course, when one reads about it in the history books – I was just having this conversation with someone recently – you know, people went from agriculture in rural areas to the city, and they got through it, and London was a mess. You read Ehrlich, or accounts of Blake and the coalmines, and early industrial London and the other industrial towns in England, and it’s terrible, but they get through the problem, and everyone ends up well off.

Of course, the getting-through takes 20 or 40 years and includes all these bad things happening and instability, and maybe we’re in such a moment here, you could argue, with all the anxieties of people losing jobs. But there’s also nostalgia. I’m sorry, if you are a coalminer and you’re out of work at age 52, it’s very tough and it’s hard to adjust, and you have to move, and you’ll never have quite as well-paying a job. It’s easy for me to sit here and say, “That’s just part of social progress,” but that’s not a very palatable thing to say. On the other hand, we shouldn’t be so nostalgic as to say it was a great thing that we had millions of people in coalmines. That’s not healthy. Or the assembly line.

All the nostalgia now – Trump’s getting the votes of, quote, the white working class, the people once working in steel mills or on assembly lines, as if that was great. It seems to me an awful lot of literature was written in the 50s, 60s, and 70s about how assembly lines were dehumanizing, how you were a cog in a machine – one wanted to get beyond that.

THIEL: Perhaps if we didn’t want Chinese manufacturing to become as powerful as it has, we needed to automate the assembly lines even faster than we did in the US. If anything, we didn’t mechanize quickly enough in some of these industries. I would say that the sense of nostalgia we have comes from a sense that there is a lot that we’re losing, and the dilemma is that the things that we’re losing are very obvious. At the other end of the tunnel, there are many things that we will gain, and we have every reason to think the things we will gain are much greater than the things we are losing. But it’s obvious what we’re losing, and it’s not at all obvious what we’re going to gain on the other end. I think that would certainly have been true of the early 19th century, when we were in the throes of the first Industrial Revolution. There are certainly aspects of today that are like that.

Again, my worry with all these things is if anything, there’s not enough happening. If you take the biggest innovation that people are talking about now, it’s self-driving cars.

KRISTOL: Let’s talk about that. Is that really so big a deal?

THIEL: I think there are one or two million people who are employed as drivers – maybe one and a half or two percent of the workforce, in that ballpark. Maybe it would increase efficiencies because you could get some work done in the car while you’re driving to the office. Maybe it could lead to a five percent increase in GDP in the whole economy. Maybe I’m underestimating it somehow. I think it would be a significant change, but it wouldn’t, say, double our GDP or anything remotely like that. The fact that that’s sort of the most transformational change we can imagine is, to my way of thinking, strangely unambitious.

KRISTOL: You famously said, “They told us” – what’s your famous joke?

THIEL: They promised us flying cars, and all we got was 140 characters. So this is better than 140 characters, and it’s –

KRISTOL: But self-driving – really, if you think about it, and I haven’t really thought about it this way before – self-driving cars are still cars going on the same roads in the same traffic, maybe a little less traffic because there are fewer –

THIEL: Maybe they can park themselves, you don’t have to look for parking.

KRISTOL: Not everyone will have to have a car. You’ll call, like Uber, and the self-driving car will come get you, so there’s a little less congestion. But it’s not flying cars. That, in a way, would have been a quantum leap.

THIEL: In theory, it could help congestion a lot. In theory, it could take a lot of pressure off parking spaces and things like that. And then at the same time, the fact that this is the technology that’s iconically the most radical that we can sort of concretely describe – it’s more than Twitter, it’s not quite vacation trips on the moon.

KRISTOL: Not to criticize Twitter, right?

THIEL: It’s a very good company.

KRISTOL: I just like Twitter, tweeting. But this is what it’s come to, in a way. There were huge breakthroughs earlier, obviously real breakthroughs. Email and all that stuff is pretty amazing.

THIEL: Self-driving car would be, I would say, almost as big as the car itself. I would still say the original invention of the car was bigger than the self-driving car. If you had to give the rough qualitative –

KRISTOL: From horse and buggy to car was a jump in what people were able to do – in lessening distance. A self-driving car doesn’t lessen distance at all; it just makes things easier. It’s sort of like Uber replacing – this is funny, I was thinking the exact same thing myself, not quite in this context – that it’s the iconic next breakthrough, but it’s not that big of a breakthrough.

THIEL: And then, of course, even with a company like Uber, where you have this major innovation, I often wonder whether it’s more symptomatic of the failure of certain political structures. The vision in the 50s and 60s was that you’d build very high-speed transport systems, which in the San Francisco Bay Area, where I live, were basically vetoed by local zoning ordinances. Uber is this very second-best solution, because you couldn’t build the much faster kinds of things that people in the 50s or 60s thought it was natural to develop. Again, it’s a compensating device for the dysfunction of our cities, where there is not enough parking, so you don’t want to drive your own car because you can never find a parking space – and then the public transportation systems don’t work.

KRISTOL: Cabs are limited – medallions, so there’s no market system for taxis.

THIEL: All of these things are dysfunctional, and then we have this innovation to sort of ameliorate the dysfunction. But if the political systems in our cities worked better, we might be doing some very, very different types of things.

KRISTOL: So you’re more worried still about the lack of real technological breakthroughs, if I can put it that way, than this notion that we’re on the cusp of all kinds of technologies, which will be dangerous to us and then out of our control.

THIEL: Far more worried about the lack of good technologies than the danger of evil ones. As a venture capitalist, I see all these bad technologies, bad science – things that just don’t work. The cool-sounding things. The general problem is not that they’re ethically problematic; it’s that they just don’t work.

The things that do work are often on a scale that’s incredibly modest. There is often a way in which humor is used to hide disturbing truths from ourselves. People made fun of technology in the early 20th century; the humor was to disguise how scary it was, how much it had changed, how drastic it was. Today, it’s like people throwing virtual cats at each other on the Internet or something like that. The humor, we make fun of technology to hide from ourselves the disturbing fact of how trivial it is, how small it is. People are still worried about what’s going on, but I think it has a very different feel from what it did 100 years ago.

KRISTOL: That’s really fascinating. Any last word of relative optimism, however? I mean, it is striking – we’re here having this conversation, and we’re about to go to a more academic conference on the mastery of nature: modern philosophy and its attempt to conquer nature and, in a way, its amazing success, though so much attention has been given to the downside of that. Was it a bad idea in the first place? Have we now created as much harm as good? Or is it out of control, as we were saying earlier? In a way, you are taking a very interesting and sort of contrarian view that maybe the problem is that we need to recapture a little of that early verve of the attempt to conquer nature for the relief of man’s estate.

THIEL: I don’t think we’ve lost it altogether. There’s a way in which the silver lining to the post-2008 malaise is that there is a sense that we need to do new things. We can’t just keep going the way things were. There is a complacency we had 10 years ago that you can no longer quite have today. People are far more open to this idea that perhaps there’s been not as much technical –

KRISTOL: Do you find that? When did you first introduce the idea –

THIEL: I wrote about it first in 2011; I started talking about it three or four years earlier. Certainly, even in 2011, people thought this was very crazy. At this point, even in Silicon Valley, there are people who have said, “I thought about it some more, and I think there is actually a lot to this.” There’s more of an openness to this notion, and maybe that’s the first step in getting out of it. You have to realize that we’ve been wandering in a desert for 40 or more years now, and not in an enchanted forest – that’s how we find the first step to get out of it.

KRISTOL: Forty years is a good time to end the wandering in the desert.

THIEL: A little over 40, but it’s time to leave.

KRISTOL: Peter, thanks so much for taking the time to have this conversation. And thank you for joining us for CONVERSATIONS.