A conversation with James Barrat

Yesterday I reviewed Our Final Invention, an accessible and provocative book about the development of artificial intelligence (AI) and the various ways it might threaten some or all of the human species, all other forms of life on Earth, and, astonishingly, even the very substance of the planet we live on. I strongly recommend everyone read it.

Today, I’m pleased to present a discussion with author James Barrat about AI. Barrat is a filmmaker, documentarian, and speaker. He’s made films for organizations like National Geographic, PBS, and the Discovery Channel.

You’re primarily known as a filmmaker. What got you so into AI that you felt compelled to write a book about it?

Filmmaking was my introduction to artificial intelligence. Many years ago I made a film about AI and interviewed Ray Kurzweil, the inventor and now head of Google’s AI effort; Rodney Brooks, the premier roboticist of our time; and science fiction legend Arthur C. Clarke. Kurzweil and Brooks were casually optimistic about the time to come when we’d share our planet with smarter-than-human machines. They both thought it would usher in an era of prosperity and, in Kurzweil’s case, prolonged life. Clarke was less optimistic. He said something like this: humans steer the future not because we’re the strongest or fastest creatures but because we’re the smartest. Once machines are smarter than we are, they will steer the future rather than us. That idea stuck with me. I began interviewing AI makers about what could go wrong. There was plenty to run with.

Why write a book rather than make a film? Simply put, there’s too much information in a good discussion of AI for a film. A book is a much better medium for the subject.

On your website, you state that discussing AI is “the most important conversation of our time” – I’m convinced that you’re right, or at least there’s a strong likelihood you’re right. Can you share why you think it’s so important?

In the next decade or two we’ll create machines that are more intelligent than we are, perhaps thousands or millions of times more intelligent. Not emotionally smarter, but better and much faster at logical reasoning and at doing the things that add up to what we call intelligence. A brief definition I like is the ability to achieve goals in a variety of novel environments, and to learn. They won’t be friendly to us by default, but they’ll be much more powerful. We must learn to control them before we create them, which is a huge challenge. If we fail, and these machines are indifferent towards us, we will not survive. My book explores these risks.

It’s an excellent book, and I think it’s better than Nick Bostrom’s Superintelligence as an accessible entry into the topic. Thank you for writing it. How does AI rank in your estimation relative to threats such as nuclear war, human overpopulation, growing economic disparity, climate change, or the destabilization of ecosystem “services”?

I think AI Risk is coming at us faster, and with more devastating potential consequences, than any other risk, including climate change. We’re fast-tracking the creation of machines thousands or millions of times more intelligent than we are, and we only have a vague sense about how to control them. It’s analogous to the birth of nuclear fission. In the 20s and 30s fission was thought of as a way to split the atom and get free energy. What happened instead was we weaponized it, and used it against other people. Then we as a species kept a gun pointed at our heads for fifty years with the nuclear arms race. Artificial Intelligence is a more sensitive technology than fission and requires more care. It’s the technology that invents technology. It’s in its honeymoon phase right now, when its creators shower us with useful and interesting products. But the weaponization is already happening – autonomous battlefield robots and drones are in the labs, and intelligent malware is on its way. We’re at the start of a very dangerous intelligence arms race.

Yeah, the discussion of Stuxnet in your book was sobering. I honestly think I’m going to start stockpiling food as a result of reading it. But there’s a huge disconnect between my reaction and the general populace. Why aren’t more people talking about it? The absence of a widespread public discussion about AI seems bizarre to me.

I wrote Our Final Invention to address the lack of public discourse about AI’s downside. I wrote it for a wide general audience with no technical background. I agree, we’re woefully blind to the risks of AI, but it will impact all of us, not just the techno-elite. Each person needs to get up to speed about AI’s promise and peril.

As I wrote in the book, I think movies have inoculated us against real concern about AI. We’ve seen that when humans are pitted against AI and robots, humans win in the end. We’ve had a lot of movie fun with AI, so we think the peril is under control. But real life will surprise us when it doesn’t follow the Hollywood script.

I thought Ex Machina addressed that reasonably well. It shows a reclusive Silicon Valley-type engineer building AI in his secret redoubt, but that “lone wolf” scenario is quite different from the real-world economics of AI development.

Can you briefly discuss the motivations of the various academic, commercial, and military groups who are pursuing AI? Do we know who they all are? (If not, why would they be motivated to be secretive?)

A huge economic wind propels the development of AI, robotics, and automation. Economically speaking, this is the century of AI. Investment in AI has doubled every year since 2009. McKinsey and Co, a management consulting firm, anticipates AI and automation will create between 10 and 25 trillion dollars of value by 2025. That almost ensures it’ll be developed with haste, not stewardship. As for the military, they don’t have the best programmers – those have gone to Google, Amazon, Facebook, IBM, Baidu, and others – but they have deep pockets. They’ll develop dangerous weapons and intelligent malware.

And yes, there are a lot of stealth companies out there developing AI. Stealth companies operate in secret to protect intellectual property. Investor Peter Thiel just unveiled one of his stealth companies which, it turns out, helped the NSA spy on us and subvert the Constitution.

Sheesh. But they’re not close, right? I mean: How close do you think “we” (i.e., one of these groups) are to making a true artificial general intelligence (AGI)?

That’s a tough question, and no one can answer it with certainty. That said, several years ago I polled some AI makers, and the mean date was 2045. A small number said 2100, and a smaller number said never. I think the recent advances in neural networks have moved AGI closer. I agree with Ray Kurzweil – about 2030, but it might be as early as 2025.

AGI will likely be the product of machines that program better than humans. And shortly after we create AGI, or machines roughly as intelligent as humans, we’ll create superintelligent machines vastly more intelligent than we are.

2030 is 13 years from the moment we’re having this discussion – about a third of my own age. That’s really, really soon, considering how little groundwork has been laid. If we knew aliens were going to land on Earth 13 years from now, I suspect we’d be frantically preparing.

So based on your research and interviews, would you care to venture how much time will elapse between the achievement of AGI and the evolution of artificial superintelligence (ASI)? A “hard takeoff” seems like the most chilling scenario. But is it realistic?

I think the achievement of AGI will require all the tools necessary for ASI. The concept of an intelligence explosion, as described by the late I.J. Good, is worth exploring here. Basically, the tools we use to create AGI will probably include self-programming software, and the end goal of self-programming software will be software that can develop artificial intelligence applications better than humans can. I wrote a whole section in the book about the economic incentives for creating better, more reliable, machine-produced software. It’s fairly easy to see this capacity growing out of neural networks and reinforcement learning systems.

Once you’ve got the basic ingredients for an intelligence explosion, AGI is a landmark, not a stepping stone. We will probably pass it at a thousand miles an hour.
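To make I.J. Good’s recursion a little more concrete, here is a deliberately simplified toy model – my own illustrative sketch, not anything from Barrat or from Good’s paper. It assumes a single made-up growth rule (each generation of a self-improving system makes an improvement proportional to its current capability) and arbitrary parameters chosen only for illustration.

```python
# Toy model of an "intelligence explosion" in the spirit of I. J. Good.
# Purely illustrative: the growth rule and every parameter are invented
# for this sketch, not taken from any real system or from the book.

def simulate_takeoff(capability=1.0, human_level=100.0, gain=0.1, generations=200):
    """Each generation, the system improves itself in proportion to its
    current capability (compound growth). Returns the generation at which
    capability first reaches 'human_level', plus the full trajectory."""
    history = [capability]
    for _ in range(generations):
        capability += gain * capability   # better designers make bigger improvements
        history.append(capability)
    crossing = next((g for g, c in enumerate(history) if c >= human_level), None)
    return crossing, history

crossing, history = simulate_takeoff()
print(f"Crosses 'human level' at generation {crossing}")
print(f"Ten generations later it is {history[crossing + 10] / history[crossing]:.1f}x beyond it")
```

The only point of the toy is that under compound self-improvement, the distance between “roughly human” and “far beyond human” is a handful of iterations – which is the intuition behind a hard takeoff, and behind Barrat’s “thousand miles an hour” remark above.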

That’s sobering. What’s happened that’s relevant to understanding AI since the 2013 publication of Our Final Invention?

I think everyone thinking and reading about AI was surprised by how quickly deep learning (advanced neural networks) has come to dominate the landscape, and how incredibly useful and powerful it has turned out to be. When I was writing Our Final Invention, AI maker Ben Goertzel spoke about the need for a technical breakthrough in AI. Deep learning seems to be it.

Are there any tangible actions that concerned citizens should consider taking to protect themselves? Or is it just a case of crossing our fingers and hoping for the best?

Because it impacts everyone, everyone should get up to speed about AI and its potential perils. If enough people are informed, they’ll force businesses to develop AI with safety and ethics in mind, and not just profit and military benefit. And they’ll vote for measures that promote AI stewardship. But the clock is ticking and the field is advancing incredibly rapidly, and behind closed doors. The world would’ve been better off had nuclear fission not been developed in secret.

What are you working on next? Can we expect you to be a voice of informed commentary on AI in the future? Who else should people be paying attention to if they want to stay on top of this issue?

I recently finished a PBS film called Spillover and am developing a couple of other films right now. I’m speaking about AI issues to corporations and colleges, which I enjoy and take seriously as part of my mandate to get the word out. There is a dearth of good speakers about AI Risk. I’m thinking about another book about AI, and about making my book into a documentary film, but as I said, I’m a little skeptical about a film’s power to influence this issue. For more information I suggest looking up organizations such as the Machine Intelligence Research Institute and The Future of Life Institute.

Thank you, James…
… To be continued… I hope.
