
The illusion that Searle hopes to induce in readers (naturally he doesn’t think of it as an illusion!) depends on his managing to make readers overlook a tremendous difference in complexity between two systems at different conceptual levels. Once he has done that, the rest is a piece of cake. At the outset, the reader is invited to identify with Searle as he hand-simulates an existing AI program that can, in a limited way, answer questions of a limited sort, in a few limited domains. Now, for a person to hand-simulate this, or any currently existing AI program—that is, to step through it at the level of detail that the computer does—would involve days, if not weeks or months, of arduous, horrendous boredom. But instead of pointing this out, Searle—as deft at distracting the reader’s attention as a practiced magician—switches the reader’s image to a hypothetical program that passes the Turing test! He has jumped up many levels of competency without so much as a passing mention. The reader is again invited to put himself or herself in the shoes of the person carrying out the step-by-step simulation, and to “feel the lack of understanding” of Chinese. This is the crux of Searle’s argument.

Our response to this (and, as we shall show later, Searle’s response as well, in a way) is basically the “Systems Reply”: that it is a mistake to try to impute the understanding to the (incidentally) animate simulator; rather it belongs to the system as a whole, which includes what Searle casually characterizes as “bits of paper.” This offhand comment, we feel, reveals how Searle’s image has blinded him to the realities of the situation. A thinking computer is as repugnant to John Searle as non-Euclidean geometry was to its unwitting discoverer, Gerolamo Saccheri, who thoroughly disowned his own creation. The time—the late 1700s—was not quite ripe for people to accept the conceptual expansion caused by alternate geometries. About fifty years later, however, non-Euclidean geometry was rediscovered and slowly accepted.

Perhaps the same will happen with “artificial intentionality”—if it is ever created. If there ever came to be a program that could pass the Turing test, it seems that Searle, instead of marveling at the power and depth of that program, would just keep on insisting that it lacked some marvelous “causal powers of the brain” (whatever they are). To point out the vacuity of that notion, Zenon Pylyshyn, in his reply to Searle, wondered if the following passage, quite reminiscent of Zuboff’s “Story of a Brain” (selection 12), would accurately characterize Searle’s viewpoint:

If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.

The weakness of Searle’s position is that he offers no clear way to tell when genuine meaning—or indeed the genuine “you”—has vanished from this system. He merely insists that some systems have intentionality by virtue of their “causal powers” and that some don’t. He vacillates about what those powers are due to. Sometimes it seems that the brain is composed of “the right stuff,” but other times it seems to be something else. It is whatever seems convenient at the moment—now it is the slippery essence that distinguishes “form” from “content,” now another essence that separates syntax from semantics, and so on.

To the Systems-Reply advocates, Searle offers the thought that the human being in the room (whom we shall from now on refer to as “Searle’s demon”) should simply memorize, or incorporate, all the material on the “bits of paper.” As if a human being could, by any conceivable stretch of the imagination, do this. The program on those “bits of paper” embodies the entire mind and character of something as complex in its ability to respond to written material as a human being is, by virtue of being able to pass the Turing test. Could any human being simply “swallow up” the entire description of another human being’s mind? We find it hard enough to memorize a written paragraph; but Searle envisions the demon as having absorbed what in all likelihood would amount to millions, if not billions, of pages densely covered with abstract symbols—and moreover as having all of this information available, whenever needed, with no retrieval problems. This unlikely aspect of the scenario is only lightly described, and it is no part of Searle’s argument to convince the reader that it makes sense. In fact, quite the contrary: a key part of his argument lies in glossing over these questions of orders of magnitude, for otherwise a skeptical reader will realize that nearly all of the understanding must lie in the billions of symbols on paper, and practically none of it in the demon. The fact that the demon is animate is an irrelevant—indeed, misleading—side issue that Searle has mistaken for a very significant fact.

We can back up this argument by exhibiting Searle’s own espousal of the Systems Reply. To do so, we should first like to place Searle’s thought experiment in a broader context. In particular, we would like to show how Searle’s setup is just one of a large family of related thought experiments, several of which are the topics of other selections in this book. Each member of this family of thought experiments is defined by a particular choice of “knob settings” on a thought-experiment generator. Its purpose is to create—in your mind’s eye—various sorts of imaginary simulations of human mental activity. Each different thought experiment is an “intuition pump” (Dennett’s term) that magnifies one facet or other of the issue, tending to push the reader toward certain conclusions. We see approximately five knobs of interest, although it is possible that someone else could come up with more; a schematic sketch of one particular choice of knob settings follows the list of knobs below.

Knob 1. This knob controls the physical “stuff” out of which the simulation is to be constructed. Its settings include: neurons and chemicals; water pipes and water; bits of paper and symbols on them; toilet paper and stones; data structures and procedures; and so on.

Knob 2. This knob controls the level of accuracy with which the simulation attempts to mimic the human brain. It can be set at an arbitrarily fine level of detail (particles inside atoms), at a coarser level such as that of cells and synapses, or even at the level that AI researchers and cognitive psychologists deal with: that of concepts and ideas, representations and processes.

Knob 3. This knob controls the physical size of the simulation. Our assumption is that microminiaturization would allow us to make a teeny-weeny network of water pipes or solid-state chips that would fit inside a thimble, and conversely that any chemical process could be blown up to the macroscopic scale.

Knob 4. This critical knob controls the size and nature of the demon who carries out the simulation. If it is a normal-sized human being, we shall call it a “Searle’s demon.” If it is a tiny elflike creature that can sit inside neurons or on particles, then we shall call it a “Haugeland’s demon,” after John Haugeland, whose response to Searle featured this notion. The settings of this knob also determine whether the demon is animate or inanimate.

Knob 5. This knob controls the speed at which the demon works. It can be set to make the demon work blindingly fast (millions of operations per microsecond) or agonizingly slowly (maybe one operation every few seconds).
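To make the image of a thought-experiment generator slightly more concrete, here is a minimal sketch, written in Python purely for illustration; the class, field, and variable names are invented for this sketch and appear nowhere in the original discussions. It simply records Searle’s original scenario and a Haugeland-style variant as two settings of the same five knobs.

```python
# A hypothetical, purely illustrative sketch of the five-knob
# "thought-experiment generator" described above. Nothing here is drawn
# from Searle or Haugeland beyond the five knobs and their example settings.
from dataclasses import dataclass


@dataclass
class ThoughtExperiment:
    stuff: str   # Knob 1: physical "stuff" the simulation is built from
    level: str   # Knob 2: level of detail at which the brain is mimicked
    size: str    # Knob 3: physical scale of the simulation
    demon: str   # Knob 4: size and nature of the demon doing the work
    speed: str   # Knob 5: how fast the demon works


# Searle's original scenario, expressed as one choice of knob settings.
chinese_room = ThoughtExperiment(
    stuff="bits of paper and symbols on them",
    level="concepts, ideas, representations, and processes",
    size="room-sized",
    demon="normal-sized human being (a Searle's demon), animate",
    speed="agonizingly slow (perhaps one operation every few seconds)",
)

# A Haugeland-style variant: a different point in the same family.
neural_simulation = ThoughtExperiment(
    stuff="neurons and chemicals",
    level="cells and synapses",
    size="ordinary brain scale",
    demon="tiny elflike creature (a Haugeland's demon), animate",
    speed="blindingly fast (millions of operations per microsecond)",
)
```

Nothing hangs on the programming language; the point is only that each thought experiment in the family corresponds to a single point in this five-dimensional space of settings, and that turning one knob at a time makes it easier to see which feature of the scenario is actually pumping the intuition.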