Learning a second language involves transcending one’s own native language. It involves mixing the new language right in with the medium in which thought takes place. Thoughts must be able to germinate as easily (or nearly as easily) in the new language as in one’s native language. The way in which a new language’s habits seep down level by level and finally get absorbed into neurons is still a giant mystery. But one thing is certain: mastery of a language does not consist in getting your “English subsystem” to execute for you a program of rules that enable you to deal with the language as a set of meaningless sounds and marks. Somehow, the new language must fuse with your internal representational system—your repertoire of concepts, images, and so on—in the same intimate way as English is fused with it. To think precisely about this, one must develop a very clear notion of levels of implementation, a computer-science concept of great power.
Computer scientists are used to the idea that one system can “emulate” another system. In fact, it follows from a theorem proven in 1936 by Alan Turing that any general-purpose digital computer can take on the guise of any other general-purpose digital computer, and the only difference to the outside world will be one of speed. The verb “emulate” is reserved for simulations, by a computer, of another computer, while “simulate” refers to the modeling of other phenomena, such as hurricanes, population curves, national elections, or even computer users.
A major difference is that simulation is almost always approximate, depending on the nature of the model of the phenomenon in question, whereas emulation is in a deep sense exact. So exact is it that when, say, a Sigma-5 computer emulates a computer with a different architecture—say, a DEC PDP-10—the users of the machine will be unaware that they are not dealing with a genuine DEC. This embedding of one architecture in another gives rise to so-called “virtual machines”—in this case, a virtual DEC-10. Underneath every virtual machine there is always some other machine. It may be a machine of the same type; it may even be another virtual machine. In his book Structured Computer Organization, Andrew Tanenbaum uses this notion of virtual machines to explain how large computer systems can be seen as a stack of virtual machines implemented one on top of the other—the bottommost one being, of course, a real machine! But in any case, the levels are sealed off from each other in a watertight way, just as Searle’s demon was prevented from talking to the Chinese speaker he was part of. (It is intriguing to imagine what kind of conversation would take place—assuming that there were an interpreter present, since Searle’s demon knows no Chinese!)
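The idea of one machine implemented on top of another can be made concrete with a toy sketch. The miniature stack machine below is not drawn from Tanenbaum or from any real architecture; its instruction set and names are invented purely for illustration. A program written for this little machine never “sees” the Python underneath it, just as the users of the emulated DEC never see the Sigma-5.

```python
# A toy "virtual machine": a tiny stack machine implemented in Python.
# The instruction set (PUSH, ADD, MUL, PRINT) is invented for illustration;
# programs written for this machine never interact with Python directly.

def run(program):
    """Execute a list of (opcode, argument) pairs on a fresh stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack[-1])
        else:
            raise ValueError(f"unknown opcode: {op}")

# A program for the virtual machine: compute (2 + 3) * 4 and print it.
program = [
    ("PUSH", 2),
    ("PUSH", 3),
    ("ADD", None),
    ("PUSH", 4),
    ("MUL", None),
    ("PRINT", None),
]

run(program)   # prints 20; the program is oblivious to the level beneath it
```

The stack-machine program runs “on” the Python interpreter, which runs on a bytecode machine, which runs on machine code, and so on down to a real machine—each level sealed off, in the usual style, from the ones below it.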
Now in theory, it is possible to have any two such levels communicate with each other, but this has traditionally been considered bad style; level-mingling is forbidden. Nonetheless, it is probable that this forbidden fruit—this blurring of two implementational levels—is exactly what goes on when a human “system” learns a second language. The second language does not run on top of the first one as a kind of software parasite, but rather becomes just as fundamentally implanted in the hardware (or nearly so). Somehow, absorption of a second language involves bringing about deep changes in one’s underlying “machine”—a vast and coherent set of changes in the ways that neurons fire, so sweeping a set of changes that it creates new ways for the higher-level entities—the symbols—to trigger one another.
To parallel this in a computer system, a higher-level program would have to have some way of creating changes inside the “demon” that is carrying its program out. This is utterly foreign to the present style in computer science of implementing one level above another in a strictly vertical, sealed-off fashion. The ability of a higher level to loop back and affect lower levels—its own underpinnings—is a kind of magic trick which we feel is very close to the core of consciousness. It will perhaps one day prove to be a key element in the push toward ever-greater flexibility in computer design, and of course in the approach toward artificial intelligence. In particular, a satisfactory answer to the question of what “understanding” really means will undoubtedly require a much sharper delineation of the ways in which different levels in a symbol-manipulating system can depend on and affect one another. All in all, these concepts have proven elusive, and a clear understanding of them is probably a good ways off yet.
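Purely as an illustration of what such level-mingling might look like in miniature, here is a variant of the toy machine above in which the running program is allowed to rewrite its own interpreter’s dispatch table. Everything here—the REDEFINE instruction, the names, the whole arrangement—is our own sketch of the forbidden fruit, not a description of how any real system works.

```python
# A toy illustration of "level-mingling": the interpreted program reaches
# down and alters the interpreter's own dispatch table while running.
# The instruction names (including REDEFINE) are invented for this sketch.

def run(program):
    dispatch = {
        "PUSH":  lambda stack, arg: stack.append(arg),
        "ADD":   lambda stack, arg: stack.append(stack.pop() + stack.pop()),
        "PRINT": lambda stack, arg: print(stack[-1]),
    }

    # The level-crossing instruction: the program supplies a new handler
    # that replaces part of its own interpreter.
    def redefine(stack, arg):
        opcode, handler = arg
        dispatch[opcode] = handler

    dispatch["REDEFINE"] = redefine

    stack = []
    for op, arg in program:
        dispatch[op](stack, arg)

program = [
    ("PUSH", 2),
    ("PUSH", 3),
    ("ADD", None),
    ("PRINT", None),    # prints 5
    # The program now rewrites the meaning of ADD at the level below itself:
    ("REDEFINE", ("ADD", lambda s, a: s.append(s.pop() * s.pop()))),
    ("PUSH", 4),
    ("PUSH", 5),
    ("ADD", None),
    ("PRINT", None),    # prints 20: "ADD" now multiplies
]

run(program)
```

Of course, in this sketch the replacement handler is itself written at the lower (Python) level, so the two levels are already entangled—which is just the point: once a program can modify its own substrate, the tidy stack of sealed-off levels is gone.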
In this rather confusing discussion of many levels, you may have started to wonder what in the world “level” really means. It is a most difficult question. As long as levels are sealed off from each other, like Searle’s demon and the Chinese-speaking woman, the notion is fairly clear. When they begin to blur, beware! Searle may admit that there are two levels in his thought experiment, but he is reluctant to admit that there are two occupied points of view—two genuine beings that feel and “have experience.” He is worried that once we admit that some computational systems might have experiences, we will have opened a Pandora’s box and all of a sudden “mind would be everywhere”—in the churning of stomachs, livers, automobile engines, and so on.
Searle seems to believe that any system whatsoever can be ascribed beliefs and feelings and so on, if one looks hard enough for a way to describe the system as an instantiation of an AI program. Obviously, that would be a disturbing notion, leading the way to panpsychism. Indeed, Searle believes that the AI people have unwittingly committed themselves to a panpsychic vision of the world.
Searle’s escape from his self-made trap is to maintain that all those “beliefs” and “feelings” that you will uncover in inanimate objects and so forth when you begin seeing mind everywhere are not genuine but “pseudo.” They lack intentionality! They lack the causal powers of the brain! (Of course, Searle would caution others to beware of confusing these notions with the naïvely dualistic notion of “soul.”)
Our escape is to deny that the trap exists at all. It is incorrect to see minds everywhere. We say: minds do not lurk in car engines or livers any more than brains lurk in car engines or livers.
It is worthwhile expanding on this a little. If you can see all the complexity of thought processes in a churning stomach, then what’s to prevent you from reading the pattern of bubbles in a carbonated beverage as coding for Chopin’s Piano Concerto in E minor? And don’t the holes in pieces of Swiss cheese code for the entire history of the United States? Sure they do—in Chinese as well as in English. After all, all things are written everywhere! Bach’s Brandenburg Concerto No. 2 is coded for in the structure of Hamlet—and Hamlet was of course readable (if you’d only known the code) from the structure of the last piece of birthday cake you gobbled down.
The problem is, in all these cases, that of specifying the code without knowing in advance what you want to read. For otherwise, you could pull a description of anyone’s mental activity out of a baseball game or a blade of grass by an arbitrarily constructed a posteriori code. But this is not science.
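The emptiness of an arbitrarily constructed a posteriori code is easy to demonstrate. The deliberately silly sketch below (our own invention—the “bubbles” and every name in it are hypothetical) “decodes” any message you like from any data you like, simply by building the code book after the message has been chosen.

```python
# An "a posteriori code": given any data and any desired message, build a
# mapping (after the fact) under which the data "encodes" the message.
# The point is that such a code reveals nothing about the data itself.

def build_code(data, message):
    """Return a code book mapping each data item to a character of the message."""
    return {item: char for item, char in zip(data, message)}

def decode(data, code_book):
    return "".join(code_book[item] for item in data)

# "Bubbles" in a carbonated beverage, read off as arbitrary measurements:
bubbles = [(3, 7), (1, 2), (9, 4), (6, 6), (0, 5), (8, 1)]

code_book = build_code(bubbles, "CHOPIN")
print(decode(bubbles, code_book))   # prints "CHOPIN"—any message fits a made-to-order code
```

All of the information lives in the code book and none in the bubbles; only a mapping fixed in advance, and held constant, could tell us anything about the bubbles themselves.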
Minds come in different grades of sophistication, surely, but minds worth calling minds exist only where sophisticated representational systems exist, and no describable mapping that remains constant in time will reveal a self-updating representational system in a car engine or a liver. Perhaps one could read mentality into a rumbling car engine in somewhat the way that people read extra meanings into the structures of the Great Pyramids or Stonehenge, the music of Bach, Shakespeare’s plays, and so on—namely, by fabricating far-fetched numerological mapping schemes that can be molded and flexed whenever needed to fit the desires of the interpreter. But we doubt that that is what Searle intends (we do grant that he intends).