If the anterior cingulate is extensively damaged, the result is the full picture of akinetic mutism; unlike Jason, the patient is in a permanent twilight state, not interacting with anyone under any circumstances. But what if the damage to the anterior cingulate is more subtle—say, the visual pathway to the anterior cingulate is selectively damaged at some stage, while the auditory pathway is fine? The result is telephone syndrome: Jason springs into action (metaphorically speaking!) when chatting on the phone but lapses into akinetic mutism when his father walks into the room. Except when he is on the telephone, Jason is no longer a person.
I am not making this distinction arbitrarily. Although Jason’s visuomotor system can still track and automatically attend to objects in space, he cannot recognize or attribute meaning to what he sees. Except when he is on the phone with his father, Jason lacks the ability to form rich, meaningful metarepresentations, which are essential not only to our uniqueness as a species but also to our uniqueness as individuals and to our sense of self.
Why is Jason a person when he is on the phone but not otherwise? Very early in evolution the brain developed the ability to create first-order sensory representations of external objects that could elicit only a very limited number of reactions. For example, a rat’s brain has only a first-order representation of a cat—specifically, as a furry, moving thing to avoid reflexively. But as the human brain evolved further, there emerged a second brain—a set of nerve connections, to be exact—that was in a sense parasitic on the old one. This second brain creates metarepresentations (representations of representations—a higher order of abstraction) by processing the information from the first brain into manageable chunks that can be used for a wider repertoire of more sophisticated responses, including language and symbolic thought. This is why, instead of just “the furry enemy” that it is for the rat, the cat appears to you as a mammal, a predator, a pet, an enemy of dogs and rats, a thing that has ears, whiskers, a long tail, and a meow; it even reminds you of Halle Berry in a latex suit. It also has a name, “cat,” symbolizing the whole cloud of associations. In short, the second brain imbues an object with meaning, creating a metarepresentation that allows you to be consciously aware of a cat in a way that the rat isn’t.
Metarepresentations are also a prerequisite for our values, beliefs, and priorities. For example, a first-order representation of disgust is a visceral “avoid it” reaction, while a metarepresentation would include, among other things, the social disgust you feel toward something you consider morally wrong or ethically inappropriate. Such higher-order representations can be juggled around in your mind in a manner that is unique to humans. They are linked to our sense of self and enable us to find meaning in the outside world—both material and social—and allow us to define ourselves in relation to it. For example, I can say, “I find her attitude toward emptying the cat litter box disgusting.”
The visual Jason is essentially dead and gone as a person, because his ability to have metarepresentations of what he sees is compromised.1 But the auditory Jason lives on; his metarepresentations of his father, his self, and their life together are largely intact when activated through the auditory channels of his brain. Intriguingly, the hearing Jason is temporarily switched off when Mr. Murdoch appears in person to talk to his son. Perhaps because the human brain emphasizes visual processing, the visual Jason stifles his auditory twin.
Jason presents a striking case of a fragmented self. Some of the “pieces” of Jason have been destroyed, yet others have been preserved and retain a surprising degree of functionality. Is Jason still Jason if he can be broken into fragments? As we shall see, a variety of neurological conditions show us that the self is not the monolithic entity it believes itself to be. This conclusion flies directly in the face of some of our most deep-seated intuitions about ourselves—but data are data. What the neurology tells us is that the self consists of many components, and the notion of one unitary self may well be an illusion.
SOMETIME IN THE twenty-first century, science will confront one of its last great mysteries: the nature of the self. That lump of flesh in your cranial vault not only generates an “objective” account of the outside world but also directly experiences an internal world—a rich mental life of sensations, meanings, and feelings. Most mysteriously, your brain also turns its view back on itself to generate your sense of self-awareness.
The search for the self—and the solutions to its many mysteries—is hardly a new pursuit. This area of study has traditionally been the preserve of philosophers, and it is fair to say that on the whole they haven’t made a lot of progress (though not for want of effort; they have been at it for two thousand years). Nonetheless, philosophy has been extremely useful in maintaining semantic hygiene and emphasizing the need for clarity in terminology.2 For example, people often use the word “consciousness” loosely to refer to two different things. One is qualia—the immediate experiential qualities of sensation, such as the redness of red or the pungency of curry—and the second is the self who experiences these sensations. Qualia are vexing to philosophers and scientists alike because even though they are palpably real and seem to lie at the very core of mental experience, physical and computational theories about brain function are utterly silent on the question of how they might arise or why they might exist.
Let me illustrate the problem with a thought experiment. Imagine an intellectually highly advanced but color-blind Martian scientist who sets out to understand what humans mean when they talk about color. With his Star Trek–level technology he studies your brain and completely figures out down to every last detail what happens when you have mental experiences involving the color red. At the end of his study he can account for every physicochemical and neurocomputational event that occurs when you see red, think of red, or say “red.” Now ask yourself: Does this account encompass everything there is to the ability to see and think about redness? Can the color-blind Martian now rest assured that he understands your alien mode of visual experience even though his brain is not wired to respond to that particular wavelength of electromagnetic radiation? Most people would say no. Most would say that no matter how detailed and accurate this outside-objective description of color cognition might be, it has a gaping hole at its center because it leaves out the quale of redness. (“Quale,” pronounced “kwah-lee,” is the singular form of “qualia.”) Indeed, there is no way you can convey the ineffable quality of redness to someone else short of hooking up your brain directly to that person’s brain.
Perhaps science will eventually stumble on some unexpected method or framework for dealing with qualia empirically and rationally, but such advances could easily be as remote from our present-day grasp as molecular genetics was to those living in the Middle Ages. Unless there is a potential Einstein of neurology lurking around somewhere.
I suggested that qualia and self are different, yet you can’t solve the one without the other: the notion of qualia without a self to experience and introspect on them is an oxymoron. In a similar vein, Freud argued that we cannot equate the self with consciousness. Our mental life, he said, is governed by the unconscious, a roiling cauldron of memories, associations, reflexes, motives, and drives. Your “conscious life” is an elaborate after-the-fact rationalization of things you really do for other reasons. Because technology had not yet advanced sufficiently to allow observation of the brain, Freud lacked the tools to take his ideas beyond the couch, and so his theories were caught in the doldrums between true science and untethered rhetoric.3