Then Axel saw the empty recreation area on the monitor. It was a dream. He was having trouble distinguishing real from imaginary. The only real thing was the tear on his cheek.
Once Ryan Junior came in and spoke with him, or so he thought. He brought a scratched-up, folded piece of paper Axel had once given him and tried to recite some words without looking at it. It sounded familiar, but some of the words were missing, and much of it was out of order. Still, Ryan Junior’s brow was deeply furrowed in concentration. He was trying hard. Despite the botched recitation, that alone made Axel happy. His regret at teaching him the ritual evaporated when he saw that look of concentration, that concerted effort.
But he couldn’t be sure it actually happened.
At times Nelly’s voice would echo through his mind. Calming words. Questions. Nothing he could bring himself to respond to. Ryan Junior seemed to appear in front of him, then he was gone, then he was back again. He was crying, and one time Axel thought for sure he must have leaned over and hugged him.
It must have been a dream.
Then the visions faded, the rhythmic undulations of his chest slowed and stopped, the soft hum of the instruments around him purred away to nothing, and his heart beat for the last time.
AFTERWORD
Detonation, it should be stressed, is entirely a work of fiction. Aside from a few quotes and historical anecdotes, any similarity to real people or organizations is accidental or coincidental. Certain themes and ideas, however, are undeniably included by design. For that reason I feel it appropriate to provide additional context on those ideas, especially as they relate to my personal beliefs.
Before I get ahead of myself, thank you for reading Detonation. I hope, above all, that you found it to be a compelling tale. But I also hope it accomplishes something more. Detonation deliberately wrestles with social, technical, and philosophical arguments related to the existential risk of superintelligent machines. I hope you have read it with a limber mind, and that it opens up avenues of consideration around this increasingly important topic.
To be frank, I do believe a Detonation-like narrative is entirely plausible in our future. If anything, Detonation underestimates the potential for a superintelligent machine to subvert our authority and gain instrumental resources. Remember, I am but a mere meatbag author. A superintelligent machine would make short work of me and my lackluster imagination, I assure you.
My beliefs on this topic, and the arguments outlined in this book, come from more than intuition. They germinated over months of thorough research, and they are supported by my professional background in health-care software development. This journey of discovery led me to believe that 1) superintelligence is only a matter of time, 2) it may arrive within a generation or even sooner, 3) we are not in the slightest way prepared for it, and 4) it poses a substantial risk of catastrophic damage to our society.
Don’t get me wrong. Superintelligent machines could do incredible things for humanity. They could cure diseases, dramatically improve our standard of living, and extend our lives. I believe in that promise. What I don’t understand is why we are doing essentially nothing to mitigate the risks.
Even if you believe superintelligent machines can be easily controlled, we have not begun to address the labor-market dislocations and financial inequalities that even the most benevolent superintelligent machines would inevitably create. Can this godlike power be entrusted to our corporate and governmental leaders? There are Prestons in this world; there are Bartzes; there are Rourke Ramas. We need to protect ourselves from them. It would be unwise to give them unconstrained access to superintelligent machines.
I believe in Bhavin Nadar’s arguments at the end of Part 1. The existential risk of superintelligent machines is greater, and more imminent, than that of any other class of existential risk, including climate change, nuclear proliferation, and synthetic biology. Furthermore, mitigation of any resulting catastrophic events will be more difficult. Ironically, if we get it right, superintelligence will greatly improve our ability to mitigate all these other existential risks. It could be the last invention we ever need to make. Right now, though, that is one big if.
We have virtually no resources allocated to getting it right. The machine intelligence control problem remains almost entirely unexplored, and regulation remains a perilous void. We have no safety standards, we have no global coalitions, and the general population has only a superficial understanding of the risks. For a force of innovation that many believe will power our economies for decades to come, shouldn’t governments be investing trillions of dollars to make sure we get it right?
Nick Bostrom puts it better than I can when he says, “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.”
If these words scare you, good. If this book scares you, good. Too few of us are afraid. Too few of us have the appropriate visceral reaction. I want you to become aware, even if awareness makes you fearful. Maybe then some of us will be moved to make choices that reduce our risk. Maybe then the wheels of change will break free of their rust and begin to turn.
We don’t have to be the Old World. Let’s not wait until our house is burning to dig a well. We need to make sure the initial conditions for superintelligence are set correctly and safely, as soon as possible. To do that, we need to build regulatory bodies, conduct safety research, form global coalitions, and more. Every day we don’t invest significant time, money, and energy in this is a day in which Gail has a greater chance of taking our future away from us.
Or to summarize this whole Afterword in mule-speak, let’s keep it between the ditches, people.
For those of you who want to explore this topic further, many of the arguments in Detonation are discussed in more detail in great works such as Superintelligence by Nick Bostrom, Life 3.0 by Max Tegmark, and Warnings by Richard A. Clarke and R.P. Eddy.
If you would like to know what you can do to help, or for more information about the risks posed by superintelligence, please visit ethagi.org. Let’s get to work!
Thank you again for reading. I welcome any and all feedback at erik-a-otto.com.
POST-LAUNCH ADDENDUM
As of this writing (December 2018), Detonation has been out for seven months. On the whole, the response has been quite positive. I am humbled and honored that it was recently named one of Kirkus Reviews’ Best Books of 2018. To further AGI safety advocacy, I have decided to donate any Detonation book-sale profits to Ethagi Inc., a non-profit dedicated to promoting the safe and ethical use of AGI.
But I am an independent publisher with limited reach. I distribute principally through Amazon.com, which has presented some challenges; sales have been adversely affected by negative reviews on the site. This is to be expected, to a degree, but one of these reviews in particular is ranked at the top because many potential readers find it “helpful” as a warning that there are “no likeable characters” in the book. I strongly disagree with that premise, but that’s fine; I can live with a difference of opinion. What I find particularly disheartening about this review, and about its top ranking, is that it bases its arguments on a number of exaggerations and out-of-context claims that mislead potential readers.