The Anchoring Rule can also be used to skew public opinion surveys to suit one's purposes. Say you're the head of an environmental group and you want to show that the public supports spending a considerable amount of money cleaning up a lake. You do this by conducting a survey that begins with a question about whether the respondent would be willing to contribute some money, say $200, to clean up the lake. Whether people say yes or no doesn't matter. You're asking this question only to get the figure $200 into people's heads. It's the next question that counts: You ask the respondent to estimate how much the average person would be willing to pay to clean up the lake. Thanks to the Anchoring Rule, you can be sure the respondent's Gut will start at $200 and adjust downward, arriving at a figure that will still be higher than it would have been if the $200 figure had never been mentioned. In a study that did precisely this, psychologists Daniel Kahneman and Jack Knetsch found that the average guess about how much people would be willing to pay to clean up the lake was $36. But in a second trial, the $200 figure was replaced with $25. When people were then asked how much others would be willing to pay to clean up the lake, the average guess was a mere $14. Thus a high anchoring number produced an average answer more than two and a half times as large as the one produced by a low number.
By now, the value of the Anchoring Rule to someone marketing fear should be obvious. Imagine that you are, say, selling software that monitors computer usage. Your main market is employers trying to stop employees from surfing the Internet on company time. But then you hear a news story about pedophiles luring kids in chat rooms and you see that this scares the hell out of parents. So you do a quick Google search and look for the biggest, scariest statistic you can find—50,000 pedophiles on the Internet at any given moment—and you put it in your marketing. Naturally, you don’t question the accuracy of the number. That’s not your business. You’re selling software.
And you're probably going to sell a lot of it, thanks to the determined efforts of many other people. After all, you're not the only one trying to alarm parents, or alert them, as some would prefer to say. There are the child-protection activists and NGOs, police officers, politicians, and journalists. They're all out there waving the same scary number, and others like it, because, just like you, they find that the scary number advances their goals, and they haven't bothered to find out whether it is made of anything more than dark fantasy.
Intelligent parents may be suspicious, however. Whether they hear this number from you or some other interested party, they may think this is a scare tactic. They won’t buy it.
But the delightful thing—delightful from your perspective—is that their doubt won’t matter. Online stalking does happen, after all. And even the skeptical parent who dismisses the 50,000 number will find herself thinking, well, what is the right answer? How many pedophiles are on the Internet? Almost instantly, she will have a plausible answer. That’s Gut’s work. And the basis for Gut’s judgment was the Anchoring Rule: Start with the number heard most recently and adjust downward.
Downward to what? Let's say she cuts the number pretty dramatically and settles on 10,000. Reason dictates that if the 50,000 figure is nonsense, then a number derived by arbitrarily adjusting that nonsense figure downward is nonsense squared. The 10,000 figure is totally meaningless and should be dismissed.
The parent probably won’t do that, however. To her, the 10,000 figure will feel right for reasons she wouldn’t be able to explain if she were asked to. Not even her skepticism about corporate marketing and bad journalism will protect her because, in her mind, this number didn’t come from marketers or journalists. It came from her. It’s what she feels is true. And for a parent, the thought of 10,000 pedophiles hunting children online at each and every moment is pretty damned scary.
Congratulations. You have a new customer.
The Anchoring Rule, influential as it is, is only a small part of a much wider scientific breakthrough with vast implications. As always in science, this burgeoning field has many authors and origins, but two figures stand out: psychologists Daniel Kahneman and Amos Tversky.
Four decades ago, Kahneman and Tversky collaborated on research that looked at how people form judgments when they’re uncertain of the facts. That may sound like a modest little backwater of academic work, but it is actually one of the most basic aspects of how people think and act. For academics, it shapes the answers to core questions in fields as diverse as economics, law, health, and public policy. For everyone else, it’s the stuff of daily life: what jobs we take; who we marry; where we live; whether we have children, and how many. It’s also crucial in determining how we perceive and respond to the endless list of threats—from choking on toast to the daily commute to terrorist attacks—that could kill us.
When Kahneman and Tversky began their work, the dominant model of how people make decisions was that of Homo economicus. "Economic man" is supremely rational. He examines evidence. He calculates what would best advance his interests as he understands them, and he acts accordingly. The Homo economicus model ruled economics departments and was hugely influential in public policy circles as well, in part because it suggested that influencing human behavior was actually rather simple. To fight crime, for example, politicians need only make punishments tougher. When the potential costs of crime outweighed the potential benefits, would-be criminals would calculate that the crime no longer advanced their interests, and they would not commit it.
"There is always a well-known solution to every human problem," wrote H. L. Mencken, "neat, plausible, and wrong," and the Homo economicus model is all that. Unlike Homo economicus, Homo sapiens is not perfectly rational. Proof of that lies not in the fact that humans occasionally make mistakes; the Homo economicus model allows for that. The proof is that in certain circumstances, people reliably make the same mistakes. We are systematically flawed. In 1957, Herbert Simon, a brilliant psychologist/economist/political scientist and future Nobel laureate, coined the term bounded rationality to describe this condition. We are rational, in other words, but only within limits.
Kahneman and Tversky set themselves the task of discovering those limits. In 1974, they gathered together several years' work and wrote a paper with the impressively dull title of "Judgment Under Uncertainty: Heuristics and Biases." They published it in Science, rather than a specialist journal, because they thought some of the insights might be interesting to non-psychologists. Their little paper caught the attention of philosophers and economists, and a furious debate began. It lasted for decades, but Kahneman and Tversky ultimately prevailed. The idea of "bounded rationality" is now widely accepted, and its insights are fueling research throughout the social sciences. Even economists increasingly accept that Homo sapiens is not Homo economicus, and a dynamic new field called "behavioral economics" is devoted to bringing the insights of psychology to economics.