Not surprisingly, a poll taken by the Harvard School of Public Health in 2002 found that Americans grossly overestimated the danger of the virus. “Of people who get sick from the West Nile virus,” the survey asked, “about how many do you think die of the disease?” There were five possible answers: “Almost None,” “About One in 10,” “About One in 4,” “More Than Half,” and “Don’t Know.” Fourteen percent answered “Almost None.” The same number said more than half, while 18 percent chose one in four and 45 percent said one in ten.

Call it “denominator blindness.” The media routinely tell people that “X people were killed,” but they rarely say “out of Y population.” X is the numerator, Y the denominator. To get a basic sense of the risk, we have to divide the numerator by the denominator, so being blind to the denominator means being blind to the real risk. An editorial in The Times of London is a case in point. The newspaper had found that the number of Britons murdered by strangers had “increased by a third in eight years.” That meant, it noted in the fourth paragraph, that the total had risen from 99 to 130. Most people would find that at least a little scary. Certainly the editorial writers did. But what the editorial did not say is that there are roughly 60 million Britons, so the chance of being murdered by a stranger rose from 99 in 60 million to 130 in 60 million. Do the math and the risk is revealed to have risen from an almost invisible 0.000165 percent to an almost invisible 0.000217 percent.
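
For readers who want to check the arithmetic, here is a minimal sketch. The 60 million figure is the rough population estimate cited above, not a census count:

```python
# Denominator blindness: a numerator alone says little about risk.
# Divide by the population to see what the numbers actually mean.
numerator_before = 99       # stranger murders, eight years earlier
numerator_after = 130       # stranger murders, latest figure
denominator = 60_000_000    # rough population of Britain

risk_before = numerator_before / denominator * 100  # as a percentage
risk_after = numerator_after / denominator * 100

print(f"Risk before: {risk_before:.6f}%")  # 0.000165%
print(f"Risk after:  {risk_after:.6f}%")   # 0.000217%
# The scary "increased by a third" headline refers only to the numerator:
print(f"Increase in the raw count: {numerator_after / numerator_before - 1:.0%}")  # 31%
```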

An even simpler way to put a risk in perspective is to compare it to other risks, as I did earlier by putting the death toll of West Nile virus alongside that of choking on food. But Roche and Muskavitch found that a mere 3 percent of newspaper articles that cited the death toll of West Nile gave a similar figure for other risks. That’s typical of reporting on all sorts of risks. A survey of British and Swedish newspapers published in the journal Public Understanding of Science found that a small minority of Swedish articles did compare risks, but “in the U.K. there were almost no comparisons of this nature,” even though the survey covered a two-month period that included the tenth anniversary of the Chernobyl disaster and the peak of the panic over BSE (mad cow disease). Readers needed perspective, but journalists did not provide it.

Another common failure was illustrated by stories reporting a September 2006 announcement from the U.S. Food and Drug Administration. The agency was requiring that the product-information sheet for the Ortho Evra birth-control patch be updated with a warning reflecting the results of a new study, which had found, in the words of one newspaper article, that “women who use the patch were twice as likely to have blood clots in their legs or lungs than those who used oral contraceptives.” In newspapers across North America, even in the New York Times, that was the only information readers got. “Twice the risk” sounds big, but what does it actually mean? If the chance of something horrible happening is one in eight, a doubling of the risk makes it one in four: Red alert! But if the risk of a jet crashing onto my desk were to double, I still wouldn’t be concerned, because two times almost-zero is still almost-zero. An Associated Press story included the information readers needed to make sense of this story: “The risk of clots in women using either the patch or pill is small,” the article noted. “Even if it doubled for those on the patch, perhaps just six women out of 10,000 would develop clots in any given year, said Daniel Shames, of the FDA’s Center for Drug Evaluation and Research.” The AP story was carried widely across North America, but many newspapers that ran it, including the New York Times, actually cut that crucial sentence.

Risks can be described in either of two ways. One is “relative risk,” which is how much bigger or smaller a risk is compared with something else. In the birth-control patch story, “twice the risk” (women who use the patch have twice the risk of those who don’t) is the relative risk. Then there’s “absolute risk,” which is simply the probability of something happening. In the patch story, 6 in 10,000 is the absolute risk. Both ways of thinking about risk have their uses, but the media routinely give readers the relative risk alone. And that can be extremely misleading.
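
To make the distinction concrete, here is a small sketch using the patch story’s numbers. The 3-in-10,000 baseline is an assumption inferred from the FDA official’s figure of roughly six per 10,000 after doubling:

```python
# Relative vs. absolute risk, using the numbers from the patch story.
baseline_absolute = 3 / 10_000  # yearly clot risk on the pill (inferred baseline)
relative_risk = 2.0             # "twice the risk" reported for the patch

patch_absolute = baseline_absolute * relative_risk

print(f"Relative risk of the patch: {relative_risk:.0f}x")  # sounds alarming
print(f"Absolute risk on the pill:  {baseline_absolute:.4%} per year")  # 0.0300%
print(f"Absolute risk on the patch: {patch_absolute:.4%} per year")     # 0.0600%
# Doubled, yet still tiny: about 6 women in 10,000 in any given year.
```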

When the medical journal The Lancet published a paper that surveyed the research on cannabis and mental illness, newspapers in Britain, where the issue has a much higher profile than elsewhere, ran frightening headlines such as this one from the Daily Mail: “Smoking just one cannabis joint raises danger of mental illness by 40 percent.” Although overdramatized by the “just one cannabis joint” phrasing, this was indeed what the researchers had found. Light users of cannabis had a 40 percent greater risk of psychosis than those who had never smoked the drug, while regular users were found to be at 50 to 200 percent greater risk. But there were two problems here. The first, which the Daily Mail and other newspapers noted in a few sentences buried in the depths of their stories, is that the research did not show cannabis use causes mental illness, only that cannabis use and mental illness are statistically associated, which means cannabis may cause mental illness or the association may be the result of something else entirely. The second is that the “40 percent” figure is the relative risk. To really understand the danger, people needed to know the absolute risk, but no newspaper provided that. An Agence France-Presse report came closest to providing that crucial information: “The report stresses that the risk of schizophrenia and other chronic psychotic disorders, even in people who use cannabis regularly, is statistically low, with a less than one-in-33 possibility in the course of a lifetime.” That’s enough to work out the basic figures: Someone who never uses cannabis faces a lifetime risk of around 1 percent; a light user’s risk is about 1.4 percent; and a regular user’s risk is between 1.5 and 3 percent. These are significant numbers, but they’re not nearly as scary as those that appeared in the media.
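
The conversion behind those figures is a single multiplication. The roughly 1 percent baseline is the lifetime figure implied by the AFP report, used here as an assumption:

```python
# From the relative risks in the headlines to absolute lifetime risks.
baseline = 0.01  # lifetime psychosis risk for never-users, roughly 1 percent

light_user = baseline * 1.40    # "40 percent greater risk"
regular_low = baseline * 1.50   # "50 to 200 percent greater risk"
regular_high = baseline * 3.00

print(f"Never-user:   {baseline:.1%}")                           # 1.0%
print(f"Light user:   {light_user:.1%}")                         # 1.4%
print(f"Regular user: {regular_low:.1%} to {regular_high:.1%}")  # 1.5% to 3.0%
```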

Why do journalists so often provide information about risks that is misleading and unduly frightening? The standard explanation for media hype is plain old self-interest. Like corporations, politicians, and activists, the media profit from fear. Fear means more newspapers sold and higher ratings, so the dramatic, the frightening, the emotional, and the worst case are brought to the fore, while anything suggesting the truth is not so exciting and alarming is played down or ignored entirely.