But despite a lot of effort over many years, the astronomers couldn’t get the money to finish the job. A frustrated Clark Chapman attended the conference in Tenerife. It had been almost twenty-five years since the risk was officially recognized, the science wasn’t in doubt, public awareness had been raised, governments had been warned, and yet the progress was modest. He wanted to know why.
To help answer that question, the conference organizers brought Paul Slovic to Tenerife. With a career that started in the early 1960s, Slovic is one of the pioneers of risk-perception research. It’s a field that essentially began in the 1970s as a result of proliferating conflicts between expert and lay opinion. In some cases—cigarettes, seat belts, drunk driving—the experts insisted the risk was greater than the public believed. But in more cases—nuclear power was the prime example—the public was alarmed by things most experts insisted weren’t so dangerous. Slovic, a professor of psychology at the University of Oregon, cofounded Decision Research, a private research corporation dedicated to figuring out why people reacted to risks the way they did.
In studies that began in the late 1970s, Slovic and his colleagues asked ordinary people to estimate the fatality rates of certain activities and technologies, to rank them according to how risky they believed them to be, and to provide more details about their feelings. Do you see this activity or technology as beneficial? Something you voluntarily engage in? Dangerous to future generations? Little understood? And so on. At the same time, they quizzed experts—professional risk analysts—on their views.
Not surprisingly, experts and laypeople disagreed about the seriousness of many items. Experts liked to think—and many still do—that this simply reflected the fact that they know what they’re talking about and laypeople don’t. But when Slovic subjected his data to statistical analyses, it quickly became clear there was much more to the explanation than that.
The experts followed the classic definition of risk that has always been used by engineers and others who have to worry about things going wrong: Risk equals probability times consequence. Here, “consequence” means the body count. Not surprisingly, the experts’ estimate of the fatalities inflicted by an activity or technology corresponded closely with their ranking of the riskiness of each item.
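What that formula means in practice is easiest to see with numbers. The sketch below is a minimal illustration only, not anything drawn from Slovic’s studies: the hazard names and figures are invented, and it simply ranks hypothetical hazards by probability multiplied by expected deaths, the way a risk analyst of the classic school would.

```python
# A toy illustration of the engineers' definition of risk:
# risk = probability of a bad event x its consequence (here, deaths).
# All hazard names and numbers below are invented for illustration only.

hazards = {
    # name: (annual probability of the bad event, deaths if it happens)
    "hazard A": (0.5, 10),       # frequent, small consequence
    "hazard B": (0.001, 5_000),  # rare, catastrophic consequence
    "hazard C": (0.1, 40),       # somewhere in between
}

def expected_fatalities(probability: float, consequence: float) -> float:
    """Risk as the experts defined it: probability times consequence."""
    return probability * consequence

# Rank the hazards the way a classic risk analyst would:
# by expected annual body count, highest first.
ranking = sorted(
    hazards.items(),
    key=lambda item: expected_fatalities(*item[1]),
    reverse=True,
)

for name, (probability, deaths) in ranking:
    print(f"{name}: expected fatalities per year = "
          f"{expected_fatalities(probability, deaths):.1f}")
```

Notice that by this arithmetic a rare catastrophe and a steady trickle of small accidents can end up with identical risk scores. As the studies described below make clear, that is exactly the point at which lay intuition parts company with the experts.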
When laypeople estimated the fatality rates of various risks, the results were mixed. In general, they knew which items were most and least lethal. Beyond that, their judgments ranged from modestly incorrect to howlingly wrong. Not that people had any sense that their hunches might be less than accurate. When Slovic asked people to rate how likely it was that an answer was wrong, they often scoffed at the very possibility. One-quarter actually put the odds of a mistake at less than 1 in 100—although 1 in 8 of the answers rated so confidently were, in fact, wrong. It was another important demonstration of why intuitions should be treated with caution—and another demonstration that they aren’t.
The most illuminating results, however, came out of the rankings of riskiness. Sometimes, laypeople’s estimate of an item’s body count closely matched how risky they felt the item to be—as it did for the experts. But sometimes there was little or no link between “risk” and “annual fatalities.” The most dramatic example was nuclear power. Laypeople, like experts, correctly said it inflicted the fewest fatalities of the items surveyed. But the experts ranked nuclear power as the twentieth most risky item on a list of thirty, while most laypeople ranked it number one. Later studies expanded the list to ninety items, but nuclear power again ranked first. Clearly, people were doing something other than multiplying probability and body count to come up with judgments about risk.
Slovic’s analyses showed that if an activity or technology were seen as having certain qualities, people boosted their estimate of its riskiness regardless of whether it was believed to kill lots of people or not. If it were seen to have other qualities, they lowered their estimates. So it didn’t matter that nuclear power didn’t have a big body count. It had all the qualities that pressed our risk-perception buttons, and that put it at the top of the public’s list of dangers.
1. Catastrophic potential: If fatalities would occur in large numbers in a single event—instead of in small numbers dispersed over time—our perception of risk rises.
2. Familiarity: Unfamiliar or novel risks make us worry more.
3. Understanding: If we believe that how an activity or technology works is not well understood, our sense of risk goes up.
4. Personal control: If we feel the potential for harm is beyond our control—like a passenger in an airplane—we worry more than if we feel in control—the driver of a car.
5. Voluntariness: If we don’t choose to engage in the risk, it feels more threatening.
6. Children: It’s much worse if kids are involved.
7. Future generations: If the risk threatens future generations, we worry more.
8. Victim identity: Identifiable victims rather than statistical abstractions make the sense of risk rise.
9. Dread: If the effects generate fear, the sense of risk rises.
10. Trust: If the institutions involved are not trusted, risk rises.
11. Media attention: More media means more worry.
12. Accident history: Bad events in the past boost the sense of risk.
13. Equity: If the benefits go to some and the dangers to others, we raise the risk ranking.
14. Benefits: If the benefits of the activity or technology are not clear, it is judged to be riskier.
15. Reversibility: If the effects of something going wrong cannot be reversed, risk rises.
16. Personal risk: If it endangers me, it’s riskier.
17. Origin: Man-made risks are riskier than those of natural origin.
18. Timing: More immediate threats loom larger while those in the future tend to be discounted.
Many of the items on Slovic’s list look like common sense. Of course something that puts children at risk presses our buttons. Of course something that involves only those who choose to get involved does not. And one needn’t have ever heard of the Example Rule to know that a risk that gets more media attention is likely to bother us more than one that doesn’t.
But for psychologists, one item on the list—“familiarity”—is particularly predictable, and particularly important. We are bombarded with sensory input, at every moment, always. One of the most basic tasks of the brain is to swiftly sort that input into two piles: the important stuff that has to be brought to the attention of the conscious mind and everything else. What qualifies as important? Mostly, it’s anything that’s new. Novelty and unfamiliarity—surprise—grab our attention like nothing else. Drive the same road you’ve driven to work every day for the last twelve years and you are likely to pay so little conscious attention that you may not remember a thing you’ve seen when you pull into the parking lot. That is, if the drive is the same as it always is. But if, on the way to work, you should happen to see a naked, potbellied man doing calisthenics on his front lawn, your consciousness will be roused from its slumber and you will arrive at work with a memory you may wish were a little less vivid.