That’s what happens when our judgments about risk go out of whack. There are deadly consequences.
So it’s important to understand why we so often get risk wrong. Why do we fear a proliferating number of relatively minor risks? Why do we so often shrug off greater threats? Why have we become a “culture of fear”?
Part of the answer lies in self-interest. Fear sells. Fear makes money. The countless companies and consultants in the business of protecting the fearful from whatever they may fear know it only too well. The more fear, the better the sales. So we have home-alarm companies frightening old ladies and young mothers by running ads featuring frightened old ladies and young mothers. Software companies scaring parents with hype about online pedophiles. Security consultants spinning scenarios of terror and death that can be avoided by spending more tax dollars on security consultants. Fear is a fantastic marketing tool, which is why we can’t turn on the television or open a newspaper without seeing it at work.
Of course, private companies and consultants aren’t the only merchants of fear. There are politicians who talk up threats, denounce their opponents as soft or incompetent, and promise to slay the wolf at the door just as soon as we do the sensible thing and elect them. There are bureaucrats plumping for bigger budgets. Government-sponsored scientists who know the rule is “no problem, no funding.” And there are the activists and nongovernmental organizations who know their influence is only as big as their media profile, and that the surest way to boost that profile is to tell the scary stories that draw reporters like vultures to corpses.
The media, too, know the value of fear. The media are in the business of profit, and as the information marketplace grows more crowded, the competition for eyes and ears steadily intensifies. Inevitably and increasingly, the media turn to fear to protect shrinking market shares because a warning of mortal peril—“A story you can’t afford to miss!”—is an excellent way to get someone’s attention.
But this is far from a complete explanation. What about the serious risks we don’t pay much attention to? There’s often money to be made dealing with them, but still we are unmoved. And the media, to be fair, occasionally pour cold water on panics and unreasonable fears, while corporations, activists, and politicians sometimes find it in their interest to play down genuine concerns—as the British government tried and failed to do in the early 1990s, when there was growing evidence linking BSE (mad cow disease) in cattle to a variant of Creutzfeldt-Jakob disease in humans. The link was real. The government insisted it wasn’t. A cabinet minister even went so far as to hold a press conference at which he fed his four-year-old daughter a hamburger made of British beef.
Clearly, there’s much more than self-interest and marketing involved. There’s culture, for one. Whether we fear this risk or that—or dismiss another as no cause for concern—often depends on our cultural values. Marijuana is a perfect example. Since the days of Depression-era black jazz musicians, pot has been associated with a hipster counterculture. Today, the young backpacker wearing a T-shirt with the famous multi-leaf symbol on it isn’t expressing his love of horticulture—it’s a statement of cultural identity. Someone like that will have a very strong inclination to dismiss any claim that marijuana may cause harm as nothing more than old-fashioned reefer madness. The same is true in reverse: For social conservatives, that cluster of leaves is a symbol of the anarchic liberalism they despise, and they will consider any evidence that marijuana causes harm to be vindication—while downplaying or simply ignoring evidence to the contrary.
Psychologists call this confirmation bias. We all do it. Once a belief is in place, we screen what we see and hear in a biased way that ensures our beliefs are “proven” correct. Psychologists have also discovered that people are vulnerable to something called group polarization—which means that when people who share beliefs get together in groups, they become more convinced that their beliefs are right and more extreme in their views. Put confirmation bias, group polarization, and culture together, and we start to understand why people can come to completely different views about which risks are frightening and which aren’t worth a second thought.
But that’s not the end of psychology’s role in understanding risk. Far from it. The real starting point for understanding why we worry and why we don’t is the individual human brain.
Four decades ago, scientists knew little about how humans perceived risks, how we judged which risks to fear and which to ignore, and how we decided what to do about them. But in the 1960s, pioneers like Paul Slovic, today a professor at the University of Oregon, set to work. They made startling discoveries, and over the ensuing decades, a new body of science grew. The implications of this new science were enormous for a whole range of different fields. In 2002, one of the major figures in this research, Daniel Kahneman, won the Nobel Prize in economics, even though Kahneman is a psychologist who never took so much as a single class in economics.
What the psychologists discovered is that a very old idea is right. Every human brain has not one but two systems of thought. They called them System One and System Two. The ancient Greeks—who arrived at this conception of humanity a little earlier than scientists—personified the two systems in the form of the gods Dionysus and Apollo. We know them better as Feeling and Reason.
System Two is Reason. It works slowly. It examines evidence. It calculates and considers. When Reason makes a decision, it’s easy to put into words and explain.
System One—Feeling—is entirely different. Unlike Reason, it works without our conscious awareness and is as fast as lightning. Feeling is the source of the snap judgments that we experience as a hunch or an intuition or as emotions like unease, worry, or fear. A decision that comes from Feeling is hard or even impossible to explain in words. You don’t know why you feel the way you do; you just do.
System One works as quickly as it does because it uses built-in rules of thumb and automatic settings. Say you’re about to take a walk at midday in Los Angeles. You may think, “What’s the risk? Am I safe?” Instantly, your brain will seek to retrieve examples of other people being attacked, robbed, or murdered in similar circumstances. If it comes up with one or more examples easily, System One will sound the alarm: The risk is high! Be afraid! And you will be. You won’t know why, really, because System One’s operations are unconscious. You’ll just have an uneasy feeling that taking a walk is dangerous—a feeling you would have trouble explaining to someone else.
What System One did is apply a simple rule of thumb: If examples of something can be recalled easily, that thing must be common. Psychologists call this the “availability heuristic.”
Obviously, System One is both brilliant and flawed. It is brilliant because the simple rules of thumb System One uses allow it to assess a situation and render a judgment in an instant—which is exactly what you need when you see a shadow move at the back of an alley and you don’t have the latest crime statistics handy. But System One is also flawed because the same rules of thumb can generate irrational conclusions.