A new book on artificial intelligence (AI) has just been published. It’s Life 3.0: Being Human in the Age of Artificial Intelligence, by MIT physicist Max Tegmark. Tegmark was one of the trailblazing thinkers interviewed by James Barrat in his book Our Final Invention, which I thought was terrific, so I was eager to see what he had to say when writing for himself. I finished the audiobook version of Life 3.0 on my commute home last night. It’s good.
Tegmark is the founder of the Future of Life Institute (FLI), a nonprofit dedicated to steering life toward a better future by cultivating conversation and knowledge about existential threats, including nuclear war, climate change, biotech pitfalls, and AI. So far, FLI has focused mostly on AI. They organized two successive conferences on AI safety and achieved a kind of consensus in the AI community with several documents that outline a vision for AI safety and the key research questions that need to be addressed before an artificial general intelligence is developed.
Tegmark’s perspective is determinedly optimistic. He envisions life as having three stages:
Life 1.0: initial replication, like microbes (initiated ~3.5 billion years ago), which is driven by biological evolution;
Life 2.0: consciousness, like humans (initiated ~2 million years ago), which is driven by cultural evolution; and
Life 3.0: artificial general intelligence and superintelligence (likely to initiate ~10s to 100s of years in the future), which is driven by technological evolution.
This is typical of the “big picture” way Tegmark thinks: zooming way out to look at the biggest of big patterns and searching for the leanest, most parsimonious and inclusive definitions. He organizes his thinking around such “simple” themes, but dives into tremendous detail as he explores them. For the most part, the book communicates his ideas in a clear, engaging fashion. For instance, he offers an analogy from computers to help visualize his sequence of milestones in life’s evolution: Life 1.0 (everything from E. coli to Pan paniscus) has both its hardware (body) and software (behavior) encoded by genes, with upgrades taking place courtesy of natural selection, over generations, for populations. Life 2.0 (us), on the other hand, is stuck with the hardware it inherited, but it can upgrade its software through learning, and it can do that for an individual within a time far shorter than a lifespan (for instance, learning how to change a car’s spark plugs in a few minutes by watching a YouTube video). Life 3.0 (AI) will be able to upgrade both its software (copying and pasting, installing new programs, etc.) and its hardware (by building robotic bodies with whatever toolkit it needs) as it finds the need to do so.
In one part of the book, he explores a great many future scenarios, some better and some worse (some catastrophically bad). He begins in the Prologue with the story of a Silicon Valley startup that is the first to build an artificial superintelligence, and how they use it first to make a lot of money and then to take over the world. But the Prologue stops before you find out what the startup folks do with their new power. Later in the book, he returns to their story and finishes it in a bunch of different ways, much as the movie Clue had several different endings. These scenarios are diverse, involving many factors (including plenty of hypothetical technology), and they span a spectrum of plausibility as well as frightfulness. Throughout, Tegmark remains a sober but enthusiastic analyst. There’s no “Terminator/Skynet” scenario on this list. It’s not malice we have to fear from AI, he argues; it’s competence.
This leads to a discussion of what our toolkit looks like at this point: tools for preventing the worst outcomes and promoting the best. Verification, validation, and security are each examined in turn. One thing I found distinctive about Tegmark’s book, as opposed to Nick Bostrom’s or James Barrat’s, was his emphasis on how the pitfalls of modern technology might apply equally well to AI. Bugs in software cause all kinds of hassles in the modern world. What if our superintelligent AI has a bug? Can we reboot it? And then there’s hacking: malicious humans can already hack all sorts of things in the modern world; will they be able to hack other people’s AI too?
Another intriguing thread came through Tegmark’s discussion of our cosmic endowment – the relationship of life to the universe. He argues that life, specifically conscious* life, gives meaning to an otherwise dead cosmos. Whether it’s biological Life 2.0 or AI, what’s the point of the world without living intelligence? Tegmark sees the stakes of getting AI right as nothing less than the fate of the universe. One way to look at Life 3.0, he says, is as our evolutionary children: we would watch AI go forth into the cosmos as proud (but ultimately outclassed) aging parents. I’m used to thinking about “AI going to space” as a question of human responsibility, but this positive spin on it made me feel better.
He also raises the intriguing notion that if other technological civilizations elsewhere in the cosmos have developed artificial superintelligence, we should perhaps be very wary of any signals we receive from them. After all, if they didn’t do it right, their spacefaring AI might visit their mistakes on us. For instance, what if we received a computer program from a distant star, sent as an encoded beam of light? If we ran such a program (think Contact), we might be installing an alien superintelligence on our computers, only to have it begin converting Earth’s matter into a series of radio antennae – thereby broadcasting the ultimate malware to still further corners of the cosmos, and at the speed of light. Yikes: we need to protect the universe from a bad AI spawned by us, but we may also have to protect ourselves from bad AI spawned elsewhere, perhaps billions of light-years away, perhaps billions of years ago.
* Significant time is given over to exploring whether machines could be conscious, by examining what neuroscientists think consciousness really is and how we might measure it in other entities to verify its presence or absence. This is a critical point for Tegmark, because only if AI is conscious do questions of ethics become relevant (as applied to the machines themselves). Consciousness is a prerequisite for happiness, and if our goal is to maximize happiness in the future, the first requirement is that consciousness persists at all.
I came away with a sense that Tegmark must be a very organized teacher, since his chapter structure is essentially: a) tell you what he’s going to tell you, b) tell you and elaborate, and c) recap what he just told you. At the conclusion of each chapter, there’s a “Bottom Line” paragraph that economically summarizes its key points; taken together, these would make a terrific two-or-three-page summary of the book.
In the Epilogue, he shares the story of developing FLI. It’s a useful way of wrapping things up, because it traces his personal intellectual journey in thinking about (and acting on) AI and ends in a much more hopeful, positive place than it begins. AI is the finish line in a race, and no one knows how far away that finish line is. But many people are running. We want to be ready when one of them crosses it and we find our intellects matched (or dwarfed) by something that is not human. Tegmark asserts that you and I are guardians of the future of life, and we need to be talking now about how we want the aftermath of this race to turn out. His emphasis in Life 3.0 is: think about what you want from AI. If you don’t know what you want, you’re unlikely to get it. So let’s start talking about it.