A bit reluctantly, Ralph signaled his friend Vulcan. When Wagstaff had set this meeting up, Vulcan had predicted that it was a trap. Ralph hated to admit that Vulcan had been right.
“Vulcan here,” came the staticky response. Already it was hard for Ralph to follow the words. “Vulcan here. I’m monitoring you. Get ready to merge, buddy. I’ll be out for the pieces in an hour.” Ralph wanted to answer, but he couldn’t think of a thing to say.
Vulcan had insisted on taping Ralph’s core and cache memories before he went out for the meeting. Once Vulcan put the hardware back together, he’d be able to program Ralph just as he was before his trip to the Maskeleyne Crater.
So in one sense Ralph would survive this. But in another sense he would not. In three minutes he would—insofar as the word means anything—die. The reconstructed Ralph Numbers would not remember the argument with Wagstaff or the climb out of Maskeleyne Crater. Of course the reconstructed Ralph Numbers would again be equipped with a self-symbol and a feeling of personal consciousness. But would the consciousness really be the same? Two minutes.
The gates and switches in Ralph’s sensory system were going. His inputs flared, sputtered, and died. No more light, no more weight. But deep in his cache memory, he still held a picture of himself, a memory of who he was… the self-symbol. He was a big metal box resting on caterpillar treads, a box with five arms and a sensory head on a long and flexible neck. He was Ralph Numbers, who had set the boppers free. One minute.
This had never happened to him before. Never like this. Suddenly he remembered he had forgotten to warn Vulcan about the diggers’ plan for revolution. He tried to send a signal, but he couldn’t tell if it was transmitted.
Ralph clutched at the elusive moth of his consciousness. I am. I am me.
Some boppers said that when you died you had access to certain secrets. But no one could ever remember his death.
Just before the mercury solder-spots melted, a question came, and with it an answer … an answer Ralph had found and lost thirty-six times before.
What is this that is I?
The light is everywhere.
Reflections
The “dying” Ralph Numbers reflects that if he gets reconstructed he will “again be equipped with a self-symbol and a feeling of personal consciousness,” but the idea that these are distinct, separable gifts that a robot might receive or be denied rings false. Adding “a feeling of personal consciousness” would not be like adding taste buds or the capacity to itch when bombarded by X-rays. (In selection 20, “Is God a Taoist?” Smullyan makes a similar claim about free will.) Is there anything, in fact, answering to the name of a feeling of personal consciousness? And what does it have to do with having a “self-symbol”? What good is a self-symbol, after all? What would it do? In “Prelude, Ant Fugue” (selection 11), Hofstadter develops the idea of active symbols, a far cry from the idea of symbols as mere tokens to be passively moved around and then observed or appreciated by their manipulator. The difference emerges clearly when we consider a tempting but treacherous line of thought: selfhood depends on self-consciousness, which is (obviously) consciousness of self; and since consciousness of anything is a matter of something like the internal display of a representation of that thing, for one to be self-conscious, there must be a symbol—one’s self-symbol—available to display to… um… oneself. Put that way, having a self-symbol looks as pointless and futile as writing your own name on your forehead and staring into a mirror all day.
This line of thought kicks up clouds of dust and leaves one hopelessly confused, so let’s approach the problem from another angle entirely. In the Reflections on “Borges and I” we considered the possibility of seeing yourself on a TV monitor and not at first realizing that it was yourself you were seeing. In such a case you would have a representation of yourself before you—before your eyes on the TV screen, or before your consciousness, if you like—but it would not be the right sort of representation of yourself. What is the right sort? The difference between a he-symbol and a me-symbol is not a difference in spelling. (You couldn’t set everything right by doing something to your “symbol in consciousness” analogous to erasing the “h” and writing in “m”.) The distinguishing feature of a self-symbol couldn’t be what it “looked like” but the role it could play.
Could a machine have a self-symbol, or a self-concept? It is hard to say. Could a lower animal? Think of a lobster. Do we suppose it is self-conscious? It shows several important symptoms of having a self-concept. First of all, when it is hungry, whom does it feed? Itself. Second, and more important, when it is hungry it won’t eat just anything edible; it won’t, for instance, eat itself—though it could, in principle. It could tear off its own legs with its claws and devour them. But it wouldn’t be that stupid, you say, for when it felt the pain in its legs, it would know whose legs were being attacked and would stop. But why would it suppose the pain it felt was its pain? And besides, mightn’t the lobster be so stupid as not to care that the pain it was causing was its own pain?
These simple questions reveal that even a very stupid creature must be designed to behave with self-regard—to put it as neutrally as possible. Even the lowly lobster must have a nervous system wired up in such a way that it will reliably distinguish self-destructive from other-destructive behavior—and strongly favor the latter. It seems quite possible that the control structures required for such self-regarding behavior can be put together without a trace of consciousness, let alone self-consciousness. After all, we can make self-protective little robot devices that cope quite well in their simple environments and even produce an overwhelmingly strong illusion of “conscious purpose”—as illustrated in selection 8, “The Soul of the Mark III Beast.” But why say this is an illusion, rather than a rudimentary form of genuine self-consciousness—akin perhaps to the self-consciousness of a lobster or worm? Because robots don’t have the concepts? Well, do lobsters? Lobsters have something like concepts, apparently: whatever they have is, in any event, enough to govern them through their self-regarding lives. Call these things what you like, robots can have them too. Perhaps we could call them unconscious or preconscious concepts. Self-concepts of a rudimentary sort. The more varied the circumstances in which a creature can recognize itself, recognize circumstances as having a bearing on itself, acquire information about itself, and devise self-regarding actions, the richer (and more valuable) its self-conception—in this sense of “concept” that does not presuppose consciousness.
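The point that self-regard needs no inner display can be made concrete with a toy sketch (all names here are invented for illustration, not drawn from any of the selections). The creature below reliably refuses to eat itself, yet its "self-concept" is nothing more than a bare identity check wired into its feeding routine—there is no symbol it inspects, no representation it displays to itself:

```python
# A minimal, hypothetical sketch of a control structure that
# distinguishes self-destructive from other-destructive behavior
# with no trace of consciousness: self-regard is just a hard-wired
# identity filter on the feeding loop.

class Lobsterish:
    def __init__(self, name):
        self.name = name
        self.energy = 0

    def edible_targets(self, things):
        # The "self-concept" in its entirety: the creature's own
        # body never appears on its menu. Nothing is displayed or
        # contemplated; the check is purely structural.
        return [t for t in things if t is not self]

    def feed(self, things):
        for _target in self.edible_targets(things):
            self.energy += 1


bob = Lobsterish("bob")
alice = Lobsterish("alice")
bob.feed([bob, alice])   # bob eats alice but skips himself
```

The `is not self` test plays the functional role of a me-symbol without "looking like" anything at all, which is just the distinction drawn above between a symbol's appearance and the role it plays.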
Suppose, to continue this thought experiment, we wish to provide our self-protective robot with some verbal ability, so it can perform the range of self-regarding actions language makes available—such as asking for help or for information, but also telling lies, issuing threats, and making promises. Organizing and controlling this behavior will surely require an even more sophisticated control structure: a representational system in the sense defined earlier, in the Reflections on “Prelude, Ant Fugue.” It will be one that not only updates information about the environment and the current location of the robot in it, but also has information about the other actors in the environment and what they are apt to know and want, what they can understand. Recall Ralph Numbers’s surmises about the motives and beliefs of Wagstaff.
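What the step up to language demands of the control structure can also be sketched in miniature (again, every name below is hypothetical). A robot that can lie or threaten must keep, alongside its model of the world and its own place in it, a model of what the other actors believe and want—exactly the sort of surmises Ralph Numbers makes about Wagstaff:

```python
# A hypothetical sketch of a representational system that tracks
# other actors as well as the robot's own state. Lying presupposes
# such a model: a lie is only worth telling if, on our model, the
# other actor does not already take the proposition to be false.

class AgentModel:
    def __init__(self):
        self.believes = {}   # proposition -> True/False, as we model it
        self.wants = set()

class WorldModel:
    def __init__(self, own_name):
        self.own_name = own_name
        self.location = None
        self.others = {}     # actor name -> AgentModel

    def model_of(self, name):
        return self.others.setdefault(name, AgentModel())

    def lie_could_work(self, name, proposition):
        # Deception is pointless against someone we model as
        # already believing the proposition false.
        return self.model_of(name).believes.get(proposition) is not False


ralph = WorldModel("Ralph")
ralph.model_of("Wagstaff").wants.add("stop the disk-wiping")
ralph.lie_could_work("Wagstaff", "I agree with you")
```

The sketch is crude, but it shows why self-regarding *verbal* behavior forces the representational system to contain nested models—representations of other representers—rather than just a map of the physical surroundings.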