I am concerned about artificial general intelligence (AGI) and its likely rapid successor, artificial superintelligence (ASI). I have written here previously about that topic, after reading Nick Bostrom’s book Superintelligence. I have just finished another book on the subject, Our Final Invention, by James Barrat. I think it’s actually a better introduction than Bostrom, because it’s written in a more journalistic, less academic style. Most chapters read like a piece you might find in WIRED or the New Yorker. Attention is given to the personalities of the experts being interviewed, the quirky historical backstories, the settings where the interviews take place. It’s more people-centered, in other words – a useful reminder of what you value as a human, considering the rest of the content discussed. It’s more approachable. I also valued it because Barrat is more focused on the urgency of the risk from artificial intelligence (AI), whereas that’s only part of Bostrom’s purview. Bostrom spends time on things like the rights of superhuman intelligences; Barrat is more focused on how hard it’s going to be to control them.
So if you haven’t given any thought to AI, I strongly encourage you to do so. It represents an economic force that will be deeply unsettling to civilization even if its deployment goes absolutely perfectly, and it is a topic of vital interest to humanity if it goes even slightly wrong. Multiple groups working for our government (defense, mainly), foreign governments, private industry, and academia are working very, very hard right now to achieve artificial intelligence, and one of them is going to get there first. One of the greatest takeaways I got from Barrat’s book is how vital that “first mover” motivation is. There is an enormous strategic advantage to whoever achieves AGI first – and therefore a tremendous economic and military incentive to race to be the first. It disincentivizes the different players from trusting one another, sharing information, or coordinating their strategies for safety. It’s like developing the atom bomb, except that private companies beholden to their shareholders are also in the race, and the atom bomb starts acting on its own once it exists. Will Google win? Will DARPA? Will someone else? How much do we trust that winner to have put appropriate safeguards in place? How much do we trust them to wield their new superpower appropriately? This really matters. One of them will get there first, and it could be in 20 years, or it could be tomorrow morning.
Because of recursive self-improvement, the first AGI has a strong likelihood of becoming the first ASI, and if that happens we will be in a truly unprecedented situation – suddenly humanity will be communicating with, bargaining with, and evaluating the trustworthiness of something that is at once smarter than us, faster-thinking than us, and totally alien. Its intelligence will not be the result of millions of years of hunting and gathering on the savanna. It will not think like we think. It will be goal-seeking, powerful in ways we are not, and it will not share our values. It can copy itself, making multiple versions that may collaborate, improving its odds still further. Right now, a huge percentage of stock trading on Wall Street is conducted by “narrow” AI over the course of milliseconds. An AGI or ASI with access to those algorithms could make a lot of money in order to further its goals. If the AI cannot control a critical piece of hardware it needs to accomplish its goals, it might quickly spin up a fortune with which to entice a human accomplice or patsy. An ASI in search of more resources might decide to leave this planet and build a ship to do so. It might build self-replicating ships that turn asteroids into mines and factories and thus into new ships bearing a clone of the ASI’s programming. These hypothetical ships could travel space with far less risk than a human astronaut could, with none of the issues of radiation poisoning, need for nutrition, or cabin fever. Once ships like these depart Earth, we won’t be able to catch them. I know it sounds hyperbolic, but I don’t see a broken link in the chain of reasoning, no step at which it becomes implausible or unlikely that the first ASI would take over the galaxy. As far as I can tell, this actually isn’t crazy talk. Could the stakes be any higher?
We have a situation where multiple groups of smart people — some we know about, some we don’t — are working on building a thing that will be as smart as us or smarter. Sooner or later, one of them will succeed. That success might be managed well, in which case its creators could accrue some serious benefits, which they may or may not share with a wider swath of humanity. If it is not managed well, there will be negative consequences, ranging from small disasters to large catastrophes. And the odds are that it won’t be managed perfectly. How bad will the mismanagement be? The tool being mismanaged has unprecedented power, so what would be a small goof-up under the normal circumstances of deploying a novel technology (e.g., the inadvertent combustibility of the Samsung Galaxy Note 7) suddenly carries much higher stakes.
Barrat spends time in the penultimate chapter documenting Stuxnet, a malware “worm” developed by US and Israeli intelligence which (a) probably succeeded in delaying Iran’s nuclear weapons program by a couple of years, but (b) is now being traded on the black market. It has the ability to zero in on a key piece of software controlling physical hardware (uranium-enriching centrifuges, in Iran’s case) and break that hardware. Barrat draws several relevant lessons from the episode: (1) targeted software can compromise physical hardware with real-world consequences; (2) Stuxnet was programmed to seek out the centrifuges but ignore other components as part of its strategy of stealth, yet the centrifuge-control software “target” could be swapped out for something vital to a different system, like our increasingly integrated “smart grid” of power supply; and (3) we didn’t do a very good job controlling Stuxnet, and that was when the only competitors were mere humans. In reading Barrat’s depiction of the consequences of a sustained power outage on our nation (when there’s no electricity to run the fuel pumps at the gas stations, the trucks don’t run, which means groceries don’t get delivered, and people starve), I was surprised to realize how vulnerable I really am.
I encourage you to read Barrat’s first chapter. I can’t see how anyone reading the near-future parable of “the Busy Child” wouldn’t be motivated to read the rest of the book – and then Bostrom’s. As a society, we need to be talking about this. It needs to be high in society’s hierarchy of concerns, alongside ecosystem coherence and resilience, emerging diseases, antibiotic resistance, large asteroid/comet impacts, and climate change. We are not preparing the way we should be, and I think that’s largely because most people are unaware how close this threat really looms.
Please take the time to familiarize yourself with the basics of AI. I want to be able to talk about it with you without sounding like an alarmist loon. The best place to start would be Barrat’s book*.
______________________________________________________
* If you don’t have time to read a whole book, I’ll bet you have half an hour to invest in these two excellent blog posts by Tim Urban: The AI Revolution: The Road to Superintelligence, and The AI Revolution: Our Immortality or Extinction. Or watch Sam Harris’s TED talk on the topic.
Thanks for recommending this book. Very interesting!
Here’s a perspective, though: Humanity is a system that has the emergent property of being a plague upon the Earth. In other words, it sure looks as though we’re on course to damage the Earth to the point where the great majority of us, at least, will be reduced to grim subsistence, and the biosphere decimated.
If ASI emerges slowly enough that we catch it in our nets, how does it not become entrained into this emergent outcome?
If it evades our nets, then at least the outcome is not known.
That’s just a perspective; I’m not saying it’s so, but it seemed worth voicing.