
Life and death are somewhat more emotional matters than lean and fat beef, so it’s not surprising that the words a doctor chooses can be even more influential than those used in Levin and Gaeth’s experiment. A 1982 experiment by Amos Tversky and Barbara McNeil demonstrated this by asking people to imagine they were patients with lung cancer who had to decide between radiation treatment and surgery. One group was told there was a 68 percent chance of being alive a year after the surgery. The other was told there was a 32 percent chance of dying. Framing the decision in terms of staying alive resulted in 44 percent opting for surgery over radiation treatment, but when the information was framed as a chance of dying, that dropped to 18 percent. Tversky and McNeil repeated this experiment with physicians and got the same results. In a different experiment, Tversky and Daniel Kahneman also showed that when people were told a flu outbreak was expected to kill 600 people, people’s judgments about which program should be implemented to deal with the outbreak were heavily influenced by whether the expected program results were described in terms of lives saved (200) or lives lost (400).

The vividness of language is also critical. In one experiment, Cass Sunstein—a University of Chicago law professor who often applies psychology’s insights to issues in law and public policy—asked students what they would pay to insure against a risk. For one group, the risk was described as “dying of cancer.” Others were told not only that the risk was death by cancer but that the death would be “very gruesome and intensely painful, as the cancer eats away at the internal organs of the body.” That change in language was found to have a major impact on what students were willing to pay for insurance—an impact that was even greater than making a large change in the probability of the feared outcome. Feeling trumped numbers. It usually does.

Of course, the most vivid form of communication is the photographic image, and, not surprisingly, there’s plenty of evidence that awful, frightening photos not only grab our attention and stick in our memories—which makes them influential via the Example Rule—they also conjure emotions that influence our risk perceptions via the Good-Bad Rule. It’s one thing to tell smokers their habit could give them lung cancer. It’s quite another to see the blackened, gnarled lungs of a dead smoker. That’s why several countries, including Canada and Australia, have replaced text-only health warnings on cigarette packs with horrible images of diseased lungs, hearts, and gums. They’re not just repulsive. They increase the perception of risk.

Even subtle changes in language can have considerable impact. Paul Slovic and his team gave forensic psychiatrists—men and women trained in math and science—what they were told was another clinician’s assessment of a mental patient confined to an institution. Based on this assessment, the psychiatrists were asked, would you release this patient? Half the assessments estimated that patients similar to Mr. Jones “have a 20 percent chance of committing an act of violence” after release. Of the psychiatrists who read this version, 21 percent said they would refuse to release the patient.

The wording of the second version of the assessment was changed very slightly. It is estimated, the assessment said, that “20 out of every 100 patients similar to Mr. Jones” will be violent after release. Of course, “20 percent” and “20 out of every 100” mean the same thing. But 41 percent of the psychiatrists who read this second version said they would keep the patient confined, so an apparently trivial change in wording boosted the refusal rate by almost 100 percent. How is that possible? The explanation lies in the emotional content of the phrase “20 percent.” It’s hollow, abstract, a mere statistic. What’s a “percent”? Can I see a “percent”? Can I touch it? No. But “20 out of every 100 patients” is very concrete and real. It invites you to see a person. And in this case, the person is committing violent acts. The inevitable result of this phrasing is that it creates images of violence—“some guy going crazy and killing someone,” as one person put it in post-experiment interviews—which make the risk feel bigger and the patient’s incarceration more necessary.

People in the business of public opinion are only too aware of the influence that seemingly minor linguistic changes can have. Magnetic resonance imaging (MRI), for example, was originally called “nuclear magnetic resonance imaging,” but the “nuclear” was dropped to avoid tainting a promising new technology with a stigmatized word. In politics, a whole industry of consultants has arisen to work on language cues like these—the Republican Party’s switch from “tax cuts” and “estate tax” to “tax relief” and “death tax” being two of its more famous fruits.

The Good-Bad Rule can also wreak havoc on our rational appreciation of probabilities. In a series of experiments conducted by Yuval Rottenstreich and Christopher Hsee, then with the Graduate School of Business at the University of Chicago, students were asked to imagine choosing between $50 cash and a chance to kiss their favorite movie star. Seventy percent said they’d take the cash. Another group of students was asked to choose between a 1 percent chance of winning $50 cash and a 1 percent chance of kissing their favorite movie star. The result was almost exactly the reverse: 65 percent chose the kiss. Rottenstreich and Hsee saw the explanation in the Good-Bad Rule: The cash carries no emotional charge, so a 1 percent chance to win $50 feels as small as it really is; but even an imagined kiss with a movie star stirs feelings that cash does not, so a 1 percent chance of such a kiss looms larger.

Rottenstreich and Hsee conducted further variations of this experiment that came to the same conclusion. Then they turned to electric shocks. Students were divided into two groups, with one group told the experiment would involve some chance of a $20 loss and the other group informed that there was a risk of “a short, painful but not dangerous shock.” Again, the cash loss is emotionally neutral. But the electric shock is truly nasty. Students were then told the chance of this bad thing happening was either 99 percent or 1 percent. So how much would you pay to avoid this risk?

When there was a 99 percent chance of losing $20, they said they would pay $18 to avoid this almost-certain loss. When the chance dropped to 1 percent, they said they would pay just one dollar to avoid the risk. Any economist would love that result. It’s a precise and calculated response to probability, perfect rationality. But the students asked to think of an electric shock did something a little different. Faced with a 99 percent chance of a shock, they said they would pay $10 to stop it. But when the risk was 1 percent, they were still willing to pay $7 to protect themselves. Clearly, the probability of being zapped had almost no influence. What mattered was that the risk of being shocked was nasty—and they felt it.

Plenty of other research shows that even when we are calm, cool, and thinking carefully, we aren’t naturally inclined to look at the odds. Should I buy an extended warranty for my new giant-screen television? The first and most important question I should ask is how likely it is to break down and need repair, but research suggests there’s a good chance I won’t even think about that. And if I do, I won’t be entirely logical about it. Certainty, for example, has been shown to have outsize influence on how we judge probabilities: A change from 100 percent to 95 percent carries far more weight than a decline from 60 percent to 55 percent, while a jump from 0 percent to 5 percent will loom like a giant over a rise from 25 percent to 30 percent. This focus on certainty helps explain our unfortunate tendency to think of safety in black-and-white terms—something is either safe or unsafe—when, in reality, safety is almost always a shade of gray.