Should anything save us, it will be technology. But you need more than tautologies to save the planet, and, especially within the futurist fraternity of Silicon Valley, technologists have little more than fairy tales to offer. Over the last decade, consumer adoration has anointed its founders and venture capitalists as something like shamans, Ouija-boarding their way toward blueprints for the world’s future. But conspicuously few of them seem meaningfully concerned about climate change. Instead, they make parsimonious investments in green energy (Bill Gates aside), fewer philanthropic payouts still (Gates again aside), and often express the perspective, outlined by Eric Schmidt, that climate change has already been solved, in the sense that a solution has been made inevitable by the speed of technological change—or even by the introduction of a particular self-advancing technology, namely machine intelligence, or AI.
Blind faith is one way of describing this worldview, though many in Silicon Valley regard machine intelligence with blind terror. Another way of looking at it is that the world’s futurists have come to regard technology as a superstructure within which all other problems, and their solutions, are contained. From that perspective, the only threat to technology must come from technology, which is perhaps why so many in Silicon Valley seem less concerned with runaway climate change than they are with runaway artificial intelligence: the only fearsome power they are likely to take seriously is the one they themselves have unleashed. It is a strange evolutionary stage for a worldview midwifed into being, in the permanent counterculture of the Bay Area, by Stewart Brand’s nature-hacking bible, the Whole Earth Catalog. And it may help explain why social media executives were so slow to process the threat that real-world politics posed to their platforms; and perhaps also why, as the science fiction writer Ted Chiang has suggested, Silicon Valley’s fear of future artificial-intelligence overlords sounds suspiciously like an unknowingly lacerating self-portrait, panic about a way of doing business embodied by the tech titans themselves:
Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do—grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.
—
Sometimes it can be hard to hold more than one extinction-level threat in your head at once. Nick Bostrom, a pioneering philosopher of existential risk, has managed it. In an influential 2002 paper taxonomizing what he called “existential risks,” he outlined twenty-three of them—risks “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”
Bostrom is not a lone doomsday intellectual but one of the leading thinkers currently strategizing ways of corralling, or at any rate conceptualizing, what they consider the species-sized threat from an out-of-control AI. But he does include climate change on his big-picture risk list. He puts it in the category “Bangs,” which he defines as the possibility that “earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.” “Bangs” is the longest of his four lists; climate change shares the category with, among others, “Badly programmed superintelligence” and “We’re living in a simulation and it gets shut down.”
In his paper, Bostrom also considers the climate-change-adjacent risk of “resource depletion or ecological destruction.” He places that threat in his next category, “Crunches,” which he describes as an episode after which “the potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.” His most representative crunch risk is probably “Technological arrest”: “the sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there.” Bostrom’s final two categories are “Shrieks,” which he defines as the possibility that “some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable,” as in the case of “Take-over by a transcending upload” or “Flawed superintelligence” (as opposed to “Badly programmed superintelligence”); and “Whimpers,” which he defines as “a posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.”
As you may have noticed, although his paper sets out to analyze “human extinction scenarios,” none of his threat assessments beyond “Bangs” is really centered on “humanity.” Instead, they are focused on what Bostrom calls “posthumanity” and others often call “transhumanism”—the possibility that technology may quickly carry us across a threshold into a new state of being, so divergent from the one we know today that we would be forced to consider it a true rupture in the evolutionary line. For some, this is simply a vision of nanobots swimming through our bloodstreams, filtering toxins and screening for tumors; for others, it is a vision of human life extracted from tangible reality and uploaded entirely to computers. You may notice here an echo of the Anthropocene. In this vision, though, humans aren’t burdened with environmental wreckage and the problem of navigating it; instead, we simply achieve a technological escape velocity.
It is hard to know just how seriously to take these visions, though they are close to universal among the Bay Area’s futurist vanguard, who have succeeded the NASAs and the Bell Labs of the last century as architects of our imagined future—and who differ among themselves primarily in their assessments of just how long it will take for all this to come to pass. Peter Thiel may complain about the pace of technological change, but perhaps that is because he worries it won’t outpace ecological and political devastation. He’s still investing in dubious eternal-youth programs and buying up land in New Zealand (where he might ride out social collapse on a civilizational scale). Y Combinator’s Sam Altman, who has distinguished himself as a kind of tech philanthropist with a small universal-basic-income pilot project and recently announced a call for geoengineering proposals he might invest in, has reportedly made a down payment on a brain-upload program that would extract his mind from this world. It’s a project in which he is also an investor, naturally.
For Bostrom, the very purpose of “humanity” is so transparently to engineer a “posthumanity” that he can use the second term as a stand-in for the first. This is not an oversight but the key to his appeal in Silicon Valley: the belief that the grandest task before technologists is not to engineer prosperity and well-being for humanity but to build a kind of portal through which we might pass into another, possibly eternal kind of existence, a technological rapture in which conceivably many—the billions lacking access to broadband, to begin with—would be left behind. It would be very hard, after all, to upload your brain to the cloud when you’re buying pay-as-you-go data by the SIM card.
The world that would be left behind is the one presently being pummeled by climate change. And Bostrom isn’t alone, of course, in identifying that risk as species-wide. There are the thousands, perhaps hundreds of thousands, of scientists who now seem to scream daily, with each extreme-weather event and new research paper, for the attention of lay readers; and no more hysterical a figure than Barack Obama was fond of using the phrase “existential threat.” And yet it is perhaps a sign of our culture’s heliotropism toward technology that, aside from proposals to colonize other planets and visions of technology liberating humans from most biological or environmental needs, we have not yet developed anything close to a religion of meaning around climate change that might comfort us, or give us purpose, in the face of possible annihilation.