Inspired by what invertebrates could do without much in the way of a brain, Brooks and his colleagues programmed the computers inside mobile robots with parallel arrays of what most of us would call reflexes. In a reflex, a simple stimulus, like intense heat on the palm of your hand, causes an immediate response: flex the joints of your arm. When the joints flex, your hand moves toward your body and usually away from the heat source. In this sense Tadro3 also works by a kind of reflex, one that is ongoing and graded rather than flipped by an on-off switch.
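To make the distinction concrete, here is a minimal Python sketch of the two reflex styles; the function names, threshold, and gain are my illustrative assumptions, not anything from Brooks's or Tadro3's actual code:

```python
# A minimal sketch contrasting an on-off reflex with a continuous, graded one.

def onoff_reflex(heat: float, threshold: float = 50.0) -> str:
    """Fire a fixed response once the stimulus crosses a threshold."""
    return "flex arm joints" if heat > threshold else "do nothing"

def continuous_reflex(light_intensity: float, gain: float = 0.5) -> float:
    """Grade the response with the stimulus: no switch, just a mapping."""
    return gain * light_intensity  # ongoing, gradual adjustment
```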
Brooks reasoned—and then demonstrated in the mid-1980s—that robots could use a storehouse of reflexes to do what the brain-based, cognition-box robots of the day could not: navigate in a changing environment. Brooks’s autonomous six-legged robot, Genghis, could walk over rough terrain and follow a human.[97] At the time Genghis was a breakthrough in the true sense of the word, the existence proof for what has become the field of behavior-based robotics.[98]
Behavior-based robotics uses the synthetic method to build up from the basics. We encountered this method when we spoke of Braitenberg’s “downhill invention” approach to understanding behavior. The synthetic approach also works hand in hand with the KISS principle because the whole idea is that the building blocks, like the reflex modules, are simple constructs, such as a stimulus linked directly to a response. The synthetic approach also works with our secret engineers’ code because we can understand the simple elements and then, piece by piece, put them together to build more complex systems that we can still understand.
When you start to synthesize the nervous system of an autonomous agent out of reflex modules, you run into an immediate problem: how do you coordinate those modules? If each reflex module automatically creates a behavior when its stimulus switch is flipped on, then what happens if two behavior modules get flipped on at the same time? Or what happens if behaviors are stimulated in sequence, one after the other, and their automatic actions overlap in time? This kind of conflict between automatic controls needs to be settled by an arbiter, a system that decides which module gets the green light. Brooks, again inspired by animals, created an arbitration scheme that he called “subsumption.” In a subsumption-style neural architecture, the robot’s programmer ranks the behavior modules. In case of conflict, the behavior module with the higher rank “subsumes,” or suppresses, the behavior module with the lower rank.[99] Once programmed, subsumption is a built-in decision arbiter. You, the autonomous agent, don’t need to consider what to do next; you simply do the lowest-level behavior as the default until you are stimulated to do something else. I’ve tried to program myself to operate with subsumption when I drive. At the bottom of the hierarchy, my default layer is a behavior I call “drive efficiently.” This module is actually a collection of submodules that includes behaviors like: avoid sudden acceleration, adjust speed to avoid red lights, and choose uncongested routes. At the top of my two-level subsumption hierarchy is “drive safely.” This module is a coordinated set of submodules straight out of driver training class: stay on the road, keep a safe following distance, don’t hit the car in front of me, and scan ahead for possible problems. In practice, the “drive safely” behavior overrides “drive efficiently” most of the time because the presence of other cars or challenging driving conditions like rain, darkness, or unfamiliar roads stimulates that module.
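Here is a minimal Python sketch of that kind of arbitration, using my two-level driving hierarchy as the example. The class, the priority-ordered list, and the sensor and action functions are my own illustrative assumptions, not Brooks's implementation:

```python
# A minimal sketch of subsumption-style arbitration: behaviors sit in a
# priority-ordered list, and the highest-ranked stimulated behavior runs,
# suppressing everything ranked below it.

class Behavior:
    def __init__(self, name, is_stimulated, act):
        self.name = name
        self.is_stimulated = is_stimulated  # callable returning bool
        self.act = act                      # callable producing an action

def arbitrate(layers):
    """Run the highest-ranked stimulated behavior.

    `layers` is ordered from highest rank to lowest; the last entry is
    the always-on default layer.
    """
    for behavior in layers[:-1]:
        if behavior.is_stimulated():
            return behavior.act()  # higher layer subsumes everything below
    return layers[-1].act()        # default layer, e.g., "drive efficiently"

# Wiring up the two-level driving hierarchy from the text; the stimulus and
# action functions here are hypothetical stand-ins.
layers = [
    Behavior("drive safely", lambda: True, lambda: "slow down, scan ahead"),
    Behavior("drive efficiently", lambda: True, lambda: "hold a steady speed"),
]
print(arbitrate(layers))  # -> "slow down, scan ahead"
```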
With subsumption in mind, I enjoy trying to analyze the driving behavior of other humans. At the lowest level many drivers appear to have the default behavior “drive like hell.” This default appears to involve a collection of submodules that includes behaviors like: pass or tailgate any car in front of you; switch lanes rapidly and, if necessary, without signaling; prevent other cars from passing you; accelerate quickly from a stop. In most drivers the “drive safely” module appears to override “drive like hell” only at high stimulus thresholds.
Any agent who can do the behavior arbitration dance with subsumption must have some familiar accoutrements: a body, a body with sensors, a body with actuators, and a body operating in the real world. Brooks sums up these requirements as follows: an autonomous agent must be embodied and situated. An embodied agent reacts to events in the world by virtue of having a physical body; this is the “body computation” that we’ve talked about before. A situated agent reacts to events in the real world by virtue of having senses; this is the basis of the “neural computation” that we invoke the minute we put together a circuit diagram of a nervous system.
As you’ve probably figured out, Tadro3 lacks a subsumption-style nervous system. Tadro3’s decisions are all ongoing, continuous adjustments of the turning angle of the tail. The resulting light-seeking behavior gives Tadro3 the know-how to detect light, move toward it, and then orbit around the spot of highest intensity.
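One plausible way to get that orbiting behavior out of a single eyespot is to turn hard when the light is dim and straighten out as it brightens. The Python sketch below is my assumption about how such a continuous controller could work, not the actual Tadro3 program:

```python
# A sketch of a continuous single-eyespot controller: the tail's turning
# angle is a smooth function of light intensity, with no on-off switch.

MAX_TURN = 30.0  # degrees of tail turn; an illustrative limit

def tail_angle(intensity: float, max_intensity: float = 1.0) -> float:
    """Turn sharply in dim light and straighten as the light brightens,
    so the robot spirals in toward the source and then circles it."""
    brightness = min(intensity / max_intensity, 1.0)
    return MAX_TURN * (1.0 - brightness)
```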
Sadly, even though Tadro3 can evolve better feeding behavior by evolving its body, it lacks the genetic wherewithal to evolve different skills. For example, if danger is lurking, Tadro3 has no way of knowing: it just senses the intensity of light through its single eyespot. See no evil, hear no evil. Such no-know-how is a good way for an organic agent, like the tunicate tadpole larva after which Tadro3 is modeled, to become lunch in the game of life.
Predation is thought to be one of the strongest selection pressures in living fishes, as we talked about at the end of Chapter 4. Applying a strong and ecologically relevant selection pressure like predation thus seems like a great way to get back to where we started: trying to understand what drove the evolution of vertebrae in the first fish-like vertebrates.
If predation is the hypothesized selection pressure, then, for the reasons just mentioned, Tadro3 can’t do the job. It’s not built to be prey. Instead, we need to upgrade to a Tadro that has both the nervous system and the body to eat and to avoid being eaten. Tadro4 is up to the task, and I’ll explain its design using the ideas we’ve developed in this chapter on embodied intelligence.
We designed Tadro4 to do what living fish (but not tunicate tadpole larvae) do. Tadro4 swims around with two eyes (photoresistors) foraging for food. When and if a predator approaches, Tadro4 detects it using an infrared proximity detector, the functional equivalent of a lateral line—an array of tiny hairs and cells running along the length of a fish’s body that move when water is displaced by the fish itself or by something moving nearby.[100] When a detector on either side of the body is triggered, Tadro4 tries to escape. This switch in behavior, from feeding to fleeing, is accomplished by a nervous system that is a two-layer subsumption hierarchy (Figure 5.8).
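In outline, one control cycle of such a hierarchy might look like the Python sketch below; the sensor hooks and the escape maneuver are illustrative stand-ins for Tadro4's actual hardware and code:

```python
# A sketch of one control cycle of a two-layer subsumption hierarchy like
# Tadro4's: an escape layer subsumes a default foraging layer.

def control_step(left_eye, right_eye, predator_near):
    """Layer 2 (escape) subsumes layer 1 (forage) when the detector fires."""
    if predator_near():  # infrared proximity detector triggered
        return "escape: turn hard away from the stimulated side"  # layer 2
    # Layer 1 (default): steer toward the brighter eye, i.e., toward "food"
    if left_eye() > right_eye():
        return "turn toward the left light"
    return "turn toward the right light"
```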
What’s really cool about this two-layer subsumption design is that it very closely resembles, at a functional level (think: functionalism), how the nervous system of fish actually operates. Most fish find food by foraging—swimming around and searching for chow. This is layer 1, the default behavior, the kind of behavior that Tadro3 performed when we pretended that the light was a food source. In addition, fish are able to detect predators, and if a predator strikes, the would-be prey hits the neural panic button and performs what we creative biologists call a “fast start.” This is layer 2, the behavior ranked higher in importance than layer 1.
97
For more on Brooks’s Genghis, including its ancestors and descendants, dig around at the website of the MIT Computer Science and Artificial Intelligence Laboratory: www.csail.mit.edu/.
98
Behavior-based robotics, as a field, was codified by Professor Ronald Arkin in his seminal textbook, Behavior-Based Robotics (Cambridge, MA: MIT Press, 1998).
99
Rodney Brooks, “A Robust Layered Control System for a Mobile Robot,” A.I. Memo 864, MIT Artificial Intelligence Laboratory, 1985. Published as R. Brooks, “A Robust Layered Control System for a Mobile Robot,” IEEE Journal of Robotics and Automation 2, no. 1 (1986): 14–23.
100
Matt McHenry, whom you may remember as one of the inventors of Tadro1, and his PhD student, William Stewart, combined experiments on and models of zebrafish to look at the possible influence of a predator on flow around a prey: W. J. Stewart and M. J. McHenry, “Sensing the Strike of a Predatory Fish Depends on the Specific Gravity of a Prey Fish,” Journal of Experimental Biology 213 (2010): 3769–3777.