Where does that leave the rest of the world? Are nonhuman primates intelligent? Is your dog intelligent? How ‘bout your African Gray parrot? If you answer either “yes” or “no,” then answer this: how do you know?[49] One answer, given by Alan Turing, is that you know by interacting.[50] He asks, “Can machines think?” In terms we’ve been using, Turing comes up with a way to answer this question—with the assumption that thinking is proof of intelligence—by creating an environment that includes at least two agents, each being a part of the other’s environment.
For Turing, the interaction between agents and their environments is conducted using language: agents, hidden from each other, converse via keyboards. The brilliance of this conversational environment is that it is dynamic and far-reaching: linguistic communication in real time is a back and forth that can be about anything. If the machine, in practice an artificial intelligence (AI) in the form of a computer program and its hardware, can fool its human interactant into thinking that the human is conversing with another human, then the AI is said to have “passed” what we have come to call the “Turing Test.” The Loebner Prize Competition is a Turing Test held every year as an international competition.[51] A bronze medal, along with a cash prize, is awarded to the AI that fools the most human judges. A gold medal awaits the AI that is indistinguishable from a human. We are still waiting for an AI to claim that prize.
An even tougher test of human-level intelligence is what Stevan Harnad calls “the total Turing Test.”[52] In the total Turing Test (T3) the AI has to be embodied and physically present in an environment shared with the human interrogators. In other words, the AI has to be an embodied robot, and human-level intelligence is only achievable with a body and a brain. The embodied robot must be able to physically perform, in all ways, as an indistinguishable member of the group of organic agents to pass the T3. You can see that the T3 is a tall order, especially if you think of humans and the human interactional environment: language, movement, and physical appearance—all have to be on the mark, like a teenager struggling to fit in. In the human arena robots are not even close to competing at the bronze-level equivalent of the T3. At the moment the T3 for humans is the stuff of science fiction, like the replicants in the movie Blade Runner.[53]
In opposition to the interaction-based Turing Test, John Searle takes a different approach to the search for intelligence.[54] He looks for systems that understand what they are thinking about. For example, you know that you are thinking right now because you can use the symbols of written or spoken language to talk to yourself or to others about your thinking. You understand that you are “expressing” yourself. Your subjective first-person experience as the agent doing the expressing allows you to know that the word symbols you manipulate in your speech or writing contain meaning. You have the ability to analyze your own mental states, and by so doing, you are aware of your own intelligence and the processes that underwrite that intelligence. You can verify that your linguistic symbols have meaning to you.
This ability to be aware of ourselves analyzing ourselves is why human scientists get excited when a dolphin recognizes itself in a mirror.[55] Diana Reiss and Lori Marino, the researchers who’ve done this work, show behavioral evidence of the dolphin’s self-awareness. The dolphin looks in the mirror, sees a spot of investigator-applied zinc oxide on the dolphin in the mirror, and then proceeds to spend time turning its body to examine the body of the dolphin-in-the-mirror for other blemishes. Reiss and Marino interpret this behavior as showing that the dolphin understands that the image in the mirror is representing “self” and not “other.” Pretty cool.
Distinguishing between yourself-as-an-agent and others-as-agents is the basis of inferring that other agents may be intelligent. We have the ability to make this distinction, and we use that ability to infer that other conscious, human agents possess the same ability. We can report those subjective experiences to others using language: I know that I’m intelligent; I know that you are a human agent like me; therefore, I infer that you are intelligent like me.
When we search for intelligent life, we combine the approaches of Searle and Turing. First, I understand that I’m intelligent because I’m the “I” experiencing my intelligence (Searle’s criterion). Second, I’m guessing that you are intelligent because when we interact you behave in ways that make me think that the only way we can be having an interaction like this is if you have an intelligence very much like my own (Turing’s test).
Most people, when asked by Daniel Wegner and his colleagues at the Mental Control Laboratory at Harvard, say that other human beings have features that we associate with an intelligent mind: consciousness, personality, feelings, emotions, rights, responsibilities, self-control, planning, thought, and recognition of emotion in others.[56] A surprise is that these same humans perceive that some of these mind-like features are possessed, to varying degrees, by entities that include the nonliving, such as God and robots. If it’s fair for me to perform the sleight of hand that equates mind-likeness and intelligence, then we twenty-first-century humans readily perceive intelligence all over the place. Perception, however, is not necessarily reality.
When Adam Lammert and I showed my colleague Ken Livingston the first working Tadro, Tadro1, we were excited and a bit nervous. Livingston, professor of psychology and one of the founders of the Cognitive Science Program at Vassar,[57] had served for both of us as a mentor in the ways of embodied robotics and artificial intelligence. When he saw Tadro swimming around in a big sink in the lab, following the beam of a flashlight we were moving around, he grinned and said, “Tadros are a piece of embodied intelligence.” Would you agree?[58] Adam and I did. Here’s why.
Putting Turing’s hat back on, let’s think about what we were doing with Tadro. We put a Tadro in the sink, turned off the lights, and then turned on a flashlight. The Tadro, which had been aimlessly swimming around the sink, changed course with what looked like purpose and curved in a right-handed loop toward us, bumping the wall of the sink, turning around to the left, and then heading back in our direction. We then played a trick: lights off. Because we’ve put green and red navigation lights on Tadro, we could see Tadro in the dark as it changed the curvature of its heading, moving now in a left-handed arc along the wall of the sink. We snuck around to where Tadro was headed, and surprise! We turned on the flashlight directly over Tadro’s head. Tadro’s response was immediate: a quick turn to the right, moving off the wall, and heading back into the darkness.
Okay, so is this the most fun we’ve ever had? Nope, but it beats washing the dishes. When you play around with Tadro, you experience a sense that you have to learn, through your interactions, about what’s not predictable and what is. You can’t predict exactly what Tadro will do, how much it will turn, where it will hit the wall of the sink. At the same time, you learn very quickly that in response to light on its single eyespot, Tadro turns to the right. When that eyespot is in darkness, Tadro turns to the left. You even figure out that you can interact with Tadro in such a way as to get it to swim straight for just a bit when you find just the right light intensity that is midway between full dark and full light.
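The steering rule you learn by playing with Tadro can be sketched in a few lines of code. This is an illustrative sketch only, not the actual Tadro control program; the function name, the normalized light scale, and the `midpoint` and `deadband` parameters are all assumptions made for the example.

```python
# A minimal sketch of the single-eyespot steering rule described above:
# light on the eyespot -> turn right, eyespot in darkness -> turn left,
# and an intermediate intensity -> roughly straight.
# Parameter names and values are illustrative, not from the real robot.

def steering(light, midpoint=0.5, deadband=0.05):
    """Map a normalized eyespot reading (0 = full dark, 1 = full light)
    to a turn direction."""
    if light > midpoint + deadband:
        return "right"     # light on the eyespot: arc to the right
    if light < midpoint - deadband:
        return "left"      # eyespot in darkness: arc to the left
    return "straight"      # near the balance point: swim straight

print(steering(0.9))  # right
print(steering(0.1))  # left
print(steering(0.5))  # straight
```

Finding the intensity that keeps Tadro swimming straight amounts to holding the flashlight so the eyespot reading sits inside that narrow band between full dark and full light.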
49
This ability to “know” or infer whether any other agent, organic or artificial, possesses a mind is an exciting area of philosophical and scientific work that’s usually called “The Problem of Other Minds.” I realize that I’m conflating “mind” and “intelligence” here. The two are often treated as interchangeable: a human mind is considered, by definition, intelligent; thus, so it goes for some, intelligence is found only in human minds.
50
Interaction of a human and a potential artificial intelligence is the basis of what Turing called the “imitation game.” We now call this the “Turing Test.” Read all about it in his wonderfully accessible paper: Alan Turing, “Computing Machinery and Intelligence,”
51
Here’s the official site of the Loebner Prize: www.loebner.net/Prizef/loebnerprize.html.
52
Stevan Harnad, “The Turing Test Is Not a Trick: Turing Indistinguishability Is a Scientific Criterion,”
53
Impossible as the T3 may seem to us as we contemplate human-level performance, I argue that the T3 has been passed, perhaps even at the level of the Loebner Prize gold medal, for a different species: cockroaches. Autonomous cockroach robots fooled real cockroaches so well that they could cause the real cockroaches to do things they didn’t normally do, like form groups in the light (they prefer the dark). Here’s the brilliant paper: J. Halloy et al., “Social Integration of Robots into Groups of Cockroaches to Control Self-Organized Choices,”
54
This paper contains an excellent description of Searle’s classic “Chinese room” thought experiment: John Searle, “Is the Brain’s Mind a Computer Program?”
55
Experimental evidence for self-recognition in dolphins can be found in this paper: D. Reiss and L. Marino, “Mirror Self-Recognition in the Bottlenose Dolphin: A Case of Cognitive Convergence,”
56
H. M. Gray, K. Gray, and D. M. Wegner, “Dimensions of Mind Perception,”
57
In 1982 Vassar College became the first institution in the world to offer an undergraduate major in cognitive science. Hampshire College disputes Vassar’s claim of primacy (see the claim on the website of their School of Cognitive Science at www.hampshire.edu/cs/).
58
A terrific exploration of embodied intelligence is Louise Barrett’s