There is certainly disagreement on when or even whether we will encounter such a nonbiological entity. My own consistent prediction is that this will first take place in 2029 and become routine in the 2030s. But putting the time frame aside, I believe that we will eventually come to regard such entities as conscious. Consider how we already treat them when we are exposed to them as characters in stories and movies: R2-D2 from the Star Wars movies, David and Teddy from the movie A.I., Data from the TV series Star Trek: The Next Generation, Johnny 5 from the movie Short Circuit, WALL-E from Disney’s movie WALL-E, T-800—the (good) Terminator—in the second and later Terminator movies, Rachael the Replicant from the movie Blade Runner (who, by the way, is not aware that she is not human), Bumblebee from the movie, TV, and comic series Transformers, and Sonny from the movie I, Robot. We empathize with these characters even though we know that they are nonbiological. We regard them as conscious persons, just as we do biological human characters. We share their feelings and fear for them when they get into trouble. If that is how we treat fictional nonbiological characters today, then that is how we will treat future real-life intelligences that do not happen to have a biological substrate.

If you do accept the leap of faith that a nonbiological entity that is convincing in its reactions to qualia is actually conscious, then consider what that implies: namely that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on.

There is a conceptual gap between science, which stands for objective measurement and the conclusions we can draw thereby, and consciousness, which is a synonym for subjective experience. We obviously cannot simply ask the entity in question, “Are you conscious?” If we look inside its “head,” biological or otherwise, to ascertain that, we would have to make philosophical assumptions about what it is we are looking for. The question of whether or not an entity is conscious is therefore not a scientific one. Based on this, some observers go on to question whether consciousness itself has any basis in reality. English writer and philosopher Susan Blackmore (born in 1951) speaks of the “grand illusion of consciousness.” She acknowledges the reality of the meme (idea) of consciousness—in other words, consciousness certainly exists as an idea, and there are a great many neocortical structures that deal with that idea, not to mention words that have been spoken and written about it. But it is not clear that the idea refers to something real. Blackmore goes on to explain that she is not necessarily denying the reality of consciousness, but rather attempting to articulate the sorts of dilemmas we encounter when we try to pin down the concept. As British psychologist and writer Stuart Sutherland (1927–1998) wrote in the International Dictionary of Psychology, “Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved.”4

However, we would be well advised not to dismiss the concept too easily as just a polite debate between philosophers—one which, incidentally, dates back more than two thousand years to the Platonic dialogues. The idea of consciousness underlies our moral system, and our legal system in turn is loosely built on those moral beliefs. If a person extinguishes someone’s consciousness, as in the act of murder, we consider that to be immoral and, with some exceptions, a high crime. Those exceptions are also relevant to consciousness, in that we might authorize police or military forces to kill certain conscious people in order to protect a greater number of other conscious people. We can debate the merits of particular exceptions, but the underlying principle holds true.

Assaulting someone and causing her to experience suffering is also generally considered immoral and illegal. If I destroy my own property, that is probably acceptable. If I destroy your property without your permission, that is probably not acceptable, not because I am causing suffering to the property itself, but because I am causing it to you as the property’s owner. On the other hand, if my property includes a conscious being such as an animal, then I as the owner of that animal do not necessarily have free moral or legal rein to do with it as I wish—there are, for example, laws against animal cruelty.

Because a great deal of our moral and legal system is based on protecting the existence of conscious entities and preventing their unnecessary suffering, making responsible judgments requires us to answer the question of who is conscious. That question is therefore not simply a matter for intellectual debate, as is evident in the controversy surrounding an issue like abortion. I should point out that the abortion debate can go somewhat beyond the question of consciousness, as pro-life proponents argue that the potential for an embryo to ultimately become a conscious person is sufficient reason for it to be awarded protection, just as someone in a coma deserves that protection. But fundamentally the issue is a debate about when a fetus becomes conscious.

Perceptions of consciousness also often affect our judgments in controversial areas. Looking at the abortion issue again, many people make a distinction between a measure like the morning-after pill, which prevents the implantation of an embryo in the uterus in the first days of pregnancy, and a late-stage abortion. The difference has to do with the likelihood that the late-stage fetus is conscious. It is difficult to maintain that an embryo only a few days old is conscious unless one takes a panprotopsychist position, but even on that view it would rank below the simplest animal in terms of consciousness. Similarly, we have very different reactions to the maltreatment of great apes versus, say, insects. No one worries too much today about causing pain and suffering to our computer software (although we do comment extensively on the ability of software to cause us suffering), but when future software has the intellectual, emotional, and moral intelligence of biological humans, this will become a genuine concern.

Thus my position is that I will accept nonbiological entities that are fully convincing in their emotional reactions to be conscious persons, and my prediction is that the consensus in society will accept them as well. Note that this definition extends beyond entities that can pass the Turing test, which requires mastery of human language. The latter are sufficiently humanlike that I would include them, and I believe that most of society will as well, but I also include entities that evidence humanlike emotional reactions but may not be able to pass the Turing test—for example, young children.

Does this resolve the philosophical question of who is conscious, at least for myself and others who accept this particular leap of faith? The answer is: not quite. We’ve covered only one case: that of entities that act in a humanlike way. Even though we are discussing future entities that are not biological, we are talking about entities that demonstrate convincing humanlike reactions, so this position is still human-centric. But what about more alien forms of intelligence that are not humanlike? We can imagine intelligences that are as complex as human brains, or perhaps vastly more complex and intricate, but that have completely different emotions and motivations. How do we decide whether or not they are conscious?

We can start by considering creatures in the biological world that have brains comparable to those of humans yet evince very different sorts of behaviors. British philosopher David Cockburn (born in 1949) writes about viewing a video of a giant squid that was under attack (or at least it thought it was—Cockburn hypothesized that it might have been afraid of the human with the video camera). The squid shuddered and cowered, and Cockburn writes, “It responded in a way which struck me immediately and powerfully as one of fear. Part of what was striking in this sequence was the way in which it was possible to see in the behavior of a creature physically so very different from human beings an emotion which was so unambiguously and specifically one of fear.”5 He concludes that the animal was feeling that emotion and he articulates the belief that most other people viewing that film would come to the same conclusion. If we accept Cockburn’s description and conclusion, then we would have to add giant squids to our list of conscious entities. However, this has not gotten us very far either, because it is still based on our empathetic reaction to an emotion that we recognize in ourselves. It is still a self-centric or human-centric perspective.