And all this is true when there’s no fear, anger, or hope involved. Toss in a strong emotion and people can easily become—to use a term coined by Cass Sunstein—“probability blind.” The feeling simply sweeps the numbers away. In a survey, Paul Slovic asked people if they agreed or disagreed that a one-in-10 million lifetime risk of getting cancer from exposure to a chemical was too small to worry about. That’s an incredibly tiny risk—far less than the lifetime risk of being killed by lightning and countless other risks we completely ignore. Still, one-third disagreed; they would worry. That’s probability blindness. The irony is that probability blindness is itself dangerous. It can easily lead people to overreact to risks and do something stupid like abandoning air travel because terrorists hijacked four planes.
It’s not just the odds that can be erased from our minds by the Good-Bad Rule. It’s also the costs. “It’s worth it if even one life is saved” is something we often hear said of some new program or regulation designed to reduce a risk. That may be true, or it may not. If, for example, the program costs $100 million and saves one life, it is almost certainly not worth it, because there are many other ways $100 million could be spent that would certainly save more than one life.
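To make that arithmetic explicit, here is a minimal sketch in Python. The “alternative programs” and their lives-saved figures are invented placeholders, not drawn from any real program or study; the point is only how a cost-per-life-saved comparison works.

```python
# A minimal sketch of the cost-per-life-saved arithmetic described above.
# All figures are hypothetical, chosen only to illustrate the comparison;
# they are not drawn from any real program or study.

programs = {
    "new regulation":        {"cost": 100_000_000, "lives_saved": 1},
    "alternative program A": {"cost": 100_000_000, "lives_saved": 20},
    "alternative program B": {"cost": 100_000_000, "lives_saved": 250},
}

for name, p in programs.items():
    cost_per_life = p["cost"] / p["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per life saved")
```

With these made-up numbers, the same $100 million buys one life in the first case and hundreds in the last, which is why “if it saves even one life” is not, by itself, an argument.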
This sort of cost-benefit analysis is itself a big and frighteningly complex field. One of the many important insights it has produced is that, other things being equal, “wealthier is healthier.” The more money people and nations have, the healthier and safer they tend to be. Disaster relief workers see this maxim in operation every time there is a major earthquake. People aren’t killed by earthquakes. They are killed by buildings that collapse in earthquakes, and so the flimsier the buildings, the more likely people are to die. This is why earthquakes of the same magnitude may kill dozens in California but hundreds of thousands in Iran, Pakistan, or India. The disparity can be seen even within the same city. When a massive earthquake struck Kobe, Japan, in 1995, killing 6,200 people, the victims were not randomly distributed across the city and region. They were overwhelmingly people living in poor neighbourhoods.
Government regulations can reduce risk and save lives. California’s buildings are as tough as they are in part because building codes require them to be. But regulations can also impose costs on economic activity, and since wealthier is healthier, economic costs can, if they are very large, put more lives at risk than they keep safe. Many researchers have tried to estimate how much regulatory cost is required to “take the life” of one person, but the results are controversial. What’s broadly accepted, however, is the idea that regulations can inflict economic costs and economic costs can reduce health and safety. We have to account for that if we want to be rational about risk.
We rarely do, of course. As political scientist Howard Margolis describes in Dealing with Risk, the public often demands action on a risk without giving the slightest consideration to the costs of that action. When circumstances force us to confront those costs, however, we may change our minds in a hurry. Margolis cites the case of asbestos in New York City’s public schools, which led to a crisis in 1993 when the start of the school year had to be delayed several weeks because work to assess the perceived danger dragged on into September. Parents had overwhelmingly supported this work. Experts had said the actual risk to any child from asbestos was tiny, especially compared to the myriad other problems poor kids in New York faced, and the cost would be enormous. But none of that mattered. Like the cancer it can cause, asbestos has the reputation of a killer. It triggers the Good-Bad Rule, and once that happens, everything else is trivial. “Don’t tell us to calm down!” one parent shouted at a public meeting. “The health of our children is at stake.”
But when the schools failed to open in September, it was a crisis of another kind for the parents. Who was going to care for their kids? For poor parents counting on the schools opening when they always do, it was a serious burden. “Within three weeks,” Margolis writes, “popular sentiment was overwhelmingly reversed.”
Experiences like these, along with research on the role of emotion in judgment, have led Slovic and other risk researchers to draw several conclusions. One is that experts are wrong to think they can ease fears about a risk simply by “getting the facts out.” If an engineer tells people they shouldn’t worry because the chance of the reactor melting down and spewing vast radioactive clouds that would saturate their children and put them at risk of cancer . . . well, they won’t be swayed by the odds. Only the rational mind—Head—cares about odds and, as we have seen, most people are not accustomed to the effort required for Head to intervene and correct Gut. Our natural inclination is to go with our intuitive judgment.
Another important implication of the Good-Bad Rule is something it shares with the Rule of Typical Things: It makes us vulnerable to scary scenarios. Consider the story told by the Bush administration in support of the invasion of Iraq. It was possible Saddam Hussein would seek to obtain the materials to build nuclear weapons. It was possible he would start a nuclear weapons program. It was possible the program would successfully create nuclear weapons. It was possible Saddam would give those weapons to terrorists. It was possible that terrorists armed with nukes would seek to detonate them in an American city, and it was possible they would succeed. All these things were possible, but a rational assessment of this scenario would estimate the odds of each of these events occurring and multiply them together, on the understanding that if even one of them failed to occur, the final disaster would not happen. But that’s not how Gut would analyze it with the Good-Bad Rule. It would start at the other end—an American city reduced to radioactive rubble, hundreds of thousands dead, hundreds of thousands more burned and sick—and it would react. This is an Awful Thing. And that feeling would not only color the question of whether this is likely or not, it would overwhelm it, particularly if the scenario were described in vivid language—language such as the White House’s oft-repeated line, “We don’t want the smoking gun to be a mushroom cloud.”
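To make the logic of the chain concrete, here is a minimal sketch in Python. The step names and probabilities are invented placeholders, not estimates of the actual events; the sketch only shows how multiplying the odds of each link shrinks the probability of the full scenario.

```python
# A minimal sketch of the chained-probability reasoning described above.
# Every probability here is a made-up placeholder, not an estimate of any
# real event; the point is only that the links multiply.

steps = {
    "seeks materials for nuclear weapons": 0.5,
    "starts a weapons program":            0.5,
    "program produces a working weapon":   0.3,
    "hands the weapon to terrorists":      0.1,
    "terrorists attempt an attack":        0.5,
    "the attack succeeds":                 0.2,
}

joint = 1.0
for step, p in steps.items():
    joint *= p  # the disaster requires every link in the chain to hold

print(f"Chance the whole scenario plays out: {joint:.5f}")
# ~0.00075 with these made-up numbers: each individually plausible link
# shrinks the probability of the final catastrophe.
```

That is how Head would work through the scenario: link by link, with the odds shrinking at every step. Gut, as the next sentences describe, starts from the mushroom cloud and works backward.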
Like terrorists armed with a nuclear weapon, an asteroid can also flatten a city. But asteroids are only rocks. They are not wrapped in the cloak of evil as terrorists are, nor are they stigmatized like cancer, asbestos, or nuclear power. They don’t stir any particular emotion, and so they don’t engage the Good-Bad Rule and overwhelm our sense of how very unlikely they are to hurt us. The Example Rule doesn’t help, either. The only really massive asteroid impact in the modern era was the Tunguska event, which happened a century ago in a place so remote only a handful of people saw it. There have been media reports of “near misses” and a considerable amount of attention paid to astronomers’ warnings, but while these may raise conscious awareness of the issue, they’re very different from the kind of concrete experience our primal brains are wired to respond to. Many people also know of the theory that an asteroid wiped out the dinosaurs, but that’s no more real and vivid in our memories than the Tunguska event, and so the Example Rule would steer Gut to conclude that the risk is tinier than it actually is.
There is simply nothing about asteroids that could make Gut sit up and take notice. We don’t feel the risk. For that reason, Paul Slovic told the astronomers at the Tenerife conference, “It will be hard to generate concern about asteroids unless there is an identifiable, certain, imminent, dreadful threat.” And of course, when there is an identifiable, certain, imminent, dreadful threat, it will probably be too late to do anything about it.