There’s another big downside to the Rule of Typical Things, one that is particularly important to how we judge risks. In 1982, Kahneman and Tversky flew to Istanbul, Turkey, to attend the Second International Congress on Forecasting. This was no ordinary gathering. The participants were all experts—from universities, governments, and corporations—whose job was assessing current trends and peering into the future. If anyone could be expected to judge the likelihood of future events rationally, it was this bunch.

The psychologists gave a version of the “Linda problem” to two groups, totaling 115 experts. The first group was asked to evaluate the probability of “a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.” The second group was asked how likely it was that there would be “a Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”

Logically, the first scenario has to be more likely than the second scenario. And yet the experts’ ratings were exactly the opposite. Both scenarios were considered unlikely, but the suspension-following-invasion scenario was judged to be three times more likely than the suspension scenario. A Soviet invasion of Poland was “typical” Soviet behavior. It fit, in the same way “active feminist” fit with Linda’s profile. And that fit heavily influenced the experts’ assessment of the whole scenario.

Many other studies produced similar results. Kahneman and Tversky divided a group of 245 undergrads at the University of British Columbia in half and asked one group to estimate the probability of “a massive flood somewhere in North America in 1983, in which more than 1,000 people drown.” The second group was asked about “an earthquake in California sometime in 1983, causing a flood in which more than 1,000 people drown.” Once again, the second scenario logically has to be less likely than the first, but people rated it one-third more likely than the first. Nothing says “California” quite like “earthquake.”
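To make the logic concrete, here is a minimal sketch of the conjunction rule in Python. Every number in it is hypothetical, invented purely for illustration; the only point is that a joint event can never be more probable than either of its parts.

```python
# A minimal sketch of the conjunction rule. All probabilities are hypothetical,
# chosen to illustrate the arithmetic, not drawn from the studies above.

p_invasion = 0.10                   # P(A): a Russian invasion of Poland in 1983
p_suspension_given_invasion = 0.30  # P(B|A): suspension of relations, if the invasion happens

# The probability of BOTH events is the product P(A and B) = P(A) * P(B|A),
# so it can never exceed the probability of either event on its own.
p_both = p_invasion * p_suspension_given_invasion

assert p_both <= p_invasion  # the conjunction is always the longer shot
print(f"P(invasion and suspension) = {p_both:.2f}")  # 0.03
```

Note that the inequality holds no matter what numbers you plug in: the product of two probabilities is never larger than either factor alone.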

As Kahneman and Tversky later wrote, the Rule of Typical Things “generally favors outcomes that make good stories or good hypotheses. The conjunction ‘feminist bank teller’ is a better hypothesis about Linda than ‘bank teller,’ and the scenario of a Russian invasion of Poland followed by a diplomatic crisis makes a better narrative than ‘diplomatic crisis.’ ” Gut is a sucker for a good story.

To see the problem with this, open any newspaper. It’s filled with experts telling stories about what will happen in the future, and those predictions have a terrible track record. Brill’s Content, a sadly defunct magazine that covered the media, had a feature that tracked the accuracy of simple, one-off predictions (“Senator Smith will win the Democratic nomination”) made by famous American pundits like George Will and Sam Donaldson. The magazine compared their results to those of a prognosticator by the name of “Chippy,” a four-year-old chimpanzee who made predictions by choosing among flash cards. Chippy was good. While the average pundit got about 50 percent of his or her predictions right—as good as a flipped coin—Chippy scored an impressive 58 percent.

Of course, pundits don’t limit their futurology to simple predictions. They often lay out elaborate scenarios explaining how Senator Smith will take the Democratic nomination and the presidential election that follows, or how unrest in Lebanon could produce a long chain reaction that will lead to war between Sunni and Shia across the Middle East, or how the Chinese refusal to revalue its currency could send one domino crashing into the next until housing prices collapse in the United States and the global economy tips into recession. Logically, for these predictions to come true, each and every link in the chain must happen—and given the pundits’ dismal record with simple one-off predictions, the odds of that happening are probably lower than Chippy’s chances of becoming president.
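The arithmetic behind that claim is easy to sketch. Assuming, purely for illustration, that each link in a pundit’s scenario is an independent call with the roughly 50 percent accuracy reported above, the chance that the whole chain plays out is the product of the links, and it collapses quickly. The five-link chain below is a hypothetical example, not a claim about any particular forecast.

```python
# A rough sketch of why elaborate scenarios fail. Assumes each link is an
# independent prediction with the ~50 percent accuracy the pundits showed
# on simple calls; real links are neither independent nor exactly 50/50,
# so treat this as illustration only.

p_single_link = 0.5   # assumed accuracy of one simple, one-off prediction
links = 5             # hypothetical number of causal links in the scenario

p_whole_scenario = p_single_link ** links
print(f"Chance all {links} links come true: {p_whole_scenario:.1%}")  # 3.1%
```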

But Gut doesn’t process this information logically. Guided by the Rule of Typical Things, it latches onto plausible details and uses them to judge the likelihood of the whole scenario coming true. As a result, Kahneman and Tversky wrote, “a detailed scenario consisting of causally linked and representative events may appear more probable than a subset of those events.” Add details, pile up predictions, construct elaborate scenarios. Logic says the more you go in this direction, the less likely it is that your forecast will prove accurate. But for most people, Gut is far more persuasive than mere logic.

Kahneman and Tversky realized what this meant for expert predictions. “This effect contributes to the appeal of scenarios and the illusory insight that they often provide,” they wrote. “A political analyst can improve scenarios by adding plausible causes and representative consequences. As Pooh-Bah in The Mikado explains, such additions provide ‘corroborative details intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative.’ ”

Does this matter? In most cases, no. Much of the pundits’ futurology may be as inaccurate as the horoscopes that appear on a different page of the newspaper, but it is no more important. Occasionally, though, what opinion leaders are saying about the future does matter—as it did in the months prior to the 2003 invasion of Iraq—and in those moments Gut’s vulnerability to a well-told tale can have very serious consequences.

THE EXAMPLE RULE

When a roulette wheel spins and the ball drops, the outcome is entirely random. On any spin, the ball could land on any number, black or red. The odds never change.

Tectonic plates are not roulette wheels and earthquakes are not random events. Heat generated by the earth’s core relentlessly drives the plates, grinding them against one another. Friction holds them in place, so stress steadily builds until the plates shudder and lurch forward in the violent moment we experience as an earthquake. With the stress released, the violence stops and the cycle begins again.

For those whose bedrooms are perched atop one of the unfortunate places where tectonic plates meet, these simple facts say something important about the risks being faced. Most important, the risk varies. Unlike the roulette wheel, the chances of an earthquake happening are not the same at all times. They are lowest immediately after an earthquake has happened. They rise as time passes and the pressure builds. And while scientists may not be able to precisely predict when an earthquake is about to happen—not yet, anyway—they do have a pretty good ability to track the rising risk.
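A small sketch makes the contrast concrete. The functions and numbers below are invented assumptions, not seismological estimates; the point is only that roulette is memoryless while earthquake risk climbs with the years since the last rupture.

```python
# An illustrative sketch contrasting the two risk profiles described above.
# All figures are made-up assumptions, not real hazard estimates.

def roulette_risk(years_since_last: int) -> float:
    """Memoryless process: the odds never change, whatever the history."""
    return 0.05  # assumed constant 5% chance per year

def earthquake_risk(years_since_last: int) -> float:
    """Stress accumulates: risk is lowest right after a quake, then rises."""
    base, growth, cap = 0.01, 0.005, 0.20  # hypothetical hazard parameters
    return min(base + growth * years_since_last, cap)

for years in (0, 10, 30):
    print(f"{years:2d} years on: roulette {roulette_risk(years):.0%}, "
          f"earthquake {earthquake_risk(years):.0%}")
```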

Knowing this, we should expect an equally clear pattern in sales of earthquake insurance. Since the risk is lowest immediately after an earthquake, that’s when sales should be lowest. As time passes, sales should rise. When scientists start warning about the Big One, sales should soar. But earthquake insurance sales actually follow exactly the opposite pattern. They are highest immediately after an earthquake and they fall steadily as time passes. Now, the first part of that is understandable. Experiencing an earthquake is a frightening way to be reminded that, yes, your house could be flattened. But it’s strange that people let their insurance lapse as time passes. And it’s downright bizarre that people don’t rush to get insurance when scientists issue warnings.