Things were about to get stranger still. Distinguished psychiatrists and scientists began to suggest, with considerable enthusiasm, that the program could play a valuable role in actually treating the ill and the disturbed. In an article in the Journal of Nervous and Mental Disease, three prominent research psychiatrists
wrote that ELIZA, with a bit of tweaking, could be "a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists." Thanks to the "time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose." Writing in Natural History, the prominent astrophysicist Carl Sagan expressed equal excitement about ELIZA's potential. He foresaw the development of "a network of computer therapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist."
In his 1950 paper "Computing Machinery and Intelligence," Alan Turing[56] had grappled with the question "Can machines think?" He proposed a simple experiment for judging whether a computer could be said to be intelligent, which he called "the imitation game" but which soon came to be known as the Turing test. It involved having a person, the "interrogator," sit at a computer terminal in an otherwise empty room and engage in a typed conversation with two other parties, one an actual person and the other a computer pretending to be a person. If the interrogator was unable to distinguish the computer from the real person, then the computer, argued Turing, could be considered intelligent. The ability to conjure a plausible self out of words would signal the arrival of a true thinking machine.
To converse with ELIZA was to engage in a variation on the Turing test. But, as Weizenbaum was astonished to discover, the people who "talked" with his program had little interest in making rational, objective judgments about the identity of ELIZA. They wanted to believe that ELIZA was a thinking machine. They wanted to imbue ELIZA with human qualities—even when they were well aware that ELIZA was nothing more than a computer program following simple and rather obvious instructions. The Turing test, it turned out, was as much a test of the way human beings think as of the way machines think. In their Journal of Nervous and Mental Disease article, the three psychiatrists hadn't just suggested that ELIZA could serve as a substitute for a real therapist. They went on to argue, in circular fashion, that a psychotherapist was in essence a kind of computer: "A human therapist can be viewed as an information processor and decision maker with a set of decision rules which are closely linked to short-range and long-range goals." In simulating a human being, however clumsily, ELIZA encouraged human beings to think of themselves as simulations of computers.
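To make concrete just how "simple and rather obvious" those instructions were, here is a minimal sketch in the spirit of ELIZA's keyword-matching and pronoun-reflection approach. It is not Weizenbaum's actual DOCTOR script; the keywords, patterns, and canned responses below are invented purely for illustration.

```python
import re

# Illustrative ELIZA-style responder: spot a keyword pattern, echo the
# user's own words back with first- and second-person pronouns swapped.
# This is a toy sketch, not Weizenbaum's original program.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

RULES = [
    # (pattern, response template); the captured fragment is reflected
    # before being slotted into the template.
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap person-markers so the fragment can be echoed back at the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return the response for the first rule whose pattern matches the input."""
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*[reflect(g) for g in match.groups()])
    return "Please go on."

if __name__ == "__main__":
    # Prints: How long have you been unhappy about your job?
    print(respond("I am unhappy about my job."))
```

A few dozen such substitution rules are enough to sustain the illusion of an attentive, non-directive listener, which is precisely what startled Weizenbaum about his users' reactions.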
The reaction to the software unnerved Weizenbaum. It planted in his mind a question he had never before asked himself but that would preoccupy him for many years: "What is it about the computer that has brought the view of man as a machine to a new level of plausibility?" In 1976, a decade after ELIZA's debut, he provided
an answer in his book Computer Power and Human Reason. To understand the effects of a computer, he argued, you had to see the machine in the context of mankind's past intellectual technologies, the long succession of tools that, like the map and the clock, transformed nature and altered "man's perception of reality." Such technologies become part of "the very stuff out of which man builds his world." Once adopted, they can never be abandoned, at least not without plunging society into "great confusion and possibly utter chaos." An intellectual technology, he wrote, "becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure."
That fact, almost "a tautology," helps explain how our dependence on digital computers grew steadily and seemingly inexorably after the machines were invented at the end of the Second World War. "The computer was not a prerequisite to the survival of modern society in the post-war period and beyond," Weizenbaum argued; "its enthusiastic, uncritical embrace by the most 'progressive' elements of American government, business, and industry made it a resource essential to society's survival in the form that the computer itself had been instrumental in shaping." He knew from his experience with time-sharing networks that the role of computers would expand beyond the automation of governmental and industrial processes. Computers would come to mediate the activities that define people's everyday lives—how they learn, how they think, how they socialize. What the history of intellectual technologies shows us, he warned, is that "the introduction of computers into some complex human activities may constitute an irreversible commitment." Our intellectual and social lives may, like our industrial routines, come to reflect the form that the computer imposes on them.
What makes us most human, Weizenbaum had come to believe, is what is least computable about us—the connections between our mind and our body, the experiences that shape our memory and our thinking, our capacity for emotion and empathy. The great danger we face as we become more intimately involved with our computers—as we come to experience more of our lives through the disembodied symbols flickering across our screens—is that we'll begin to lose our humanness, to sacrifice the very qualities that separate us from machines. The only way to avoid that fate, Weizenbaum wrote, is to have the self-awareness and the courage to refuse to delegate to computers the most human of our mental activities and intellectual pursuits, particularly "tasks that demand wisdom."
In addition to being a learned treatise on the workings of computers and software, Weizenbaum's book was a cri de coeur,[57] a computer programmer's passionate and at times self-righteous examination of the limits of his profession. The book did not endear the author to his peers. After it came out, Weizenbaum was spurned as a heretic by leading computer scientists, particularly those pursuing artificial intelligence. John McCarthy, one of the organizers of the original Dartmouth AI conference, spoke
for many technologists when, in a mocking review, he dismissed Computer Power and Human Reason as "an unreasonable book" and scolded Weizenbaum for unscientific "moralizing." Outside the data-processing field, the book caused only a brief stir. It appeared just as the first personal computers were making the leap from hobbyists' workbenches to mass production. The public, primed for the start of a buying spree that would put computers into most every office, home, and school in the land, was in no mood to entertain an apostate's doubts.
When a carpenter picks up a hammer, the hammer becomes, so far as his brain is concerned, part of his hand. When a soldier raises a pair of binoculars to his face, his brain sees through a new set of eyes, adapting instantaneously to a very different field of view. The experiments on pliers-wielding monkeys revealed how readily the plastic primate brain can incorporate tools into its sensory maps, making the artificial feel natural. In the human brain, that capacity has advanced far beyond what's seen in even our closest primate cousins. Our ability to meld with all manner of tools is one of the qualities that most distinguishes us as a species. In combination with our superior cognitive skills, it's what makes us so good at using new technologies. It's also what makes us so good at inventing them. Our brains can imagine the mechanics and the benefits of using a new device before that device even exists. The evolution of our extraordinary mental capacity to blur the boundary between the internal and the external, the body and the instrument, was, says University of Oregon neuroscientist Scott Frey, "no doubt a fundamental step in the development of technology."