Let us unmask the horror. The face revealed: the why-robots question. Aha! We have faced you before, foul query, back in Chapter 1. But we are different now—stronger. We have data. We can wrestle with you once more, emboldened by our experience and knowledge. And we know much.

We know that the process that we’ve dubbed “evolutionary biorobotics” works. We know that we can design and build autonomous Evolvabots that represent and hence model extinct and living animals. We can let a population of Evolvabots loose in a simplified world, and that population will evolve under the combined effects of history, randomness, and selection. We know that we can use Evolvabots to test hypotheses about the evolution of the traits of early vertebrates. And we know that by virtue of their explicit simplicity, Evolvabots allow us to witness, interpret, and understand puzzling evolutionary patterns.

But is that enough? Wouldn’t we learn the same thing—and learn it faster—from digital robots? No, no, no, whispering ghost, leave us be! The problem, and the difference, is physical. We don’t simulate the physical world—we live it. Remember Chapter 1? I’ll recap the reasons for building physically embodied robots and add, in italics, what we’ve learned since we first set eyes on this list.

With physically embodied robots built to model animals (what Webb calls biorobots):

* You can’t violate the laws of physics… because the robots are enacting, not modeling, the laws of physics.

* You can build a simplified version of an animal… using the KISS principle, the engineer’s secret code, and Webb’s modeling dimensions as guidelines.

* You can change the size of the animal… to suit the needs of your experiment or match the physical situation of the targeted system.

* You can isolate and change single parts, keeping all else constant… giving you a decent chance to understand the behavioral complexities that even simple agents produce.

* You can reconstruct extinct animals and some of their behaviors… if you know enough about the anatomy and physiology of the targets and the environments in which they lived.

* You can create animal behavior from the interaction of the agent and the world… without needing to code “behavior” into the “brain” because behavior is the dynamic spatiotemporal event that occurs when an autonomous agent operates in an ongoing perception-action feedback loop with its world.

* You can test hypotheses about how animals function in terms of biomechanics, behavior, and evolution… if and only if your embodied robot is carefully designed to represent explicit features of your biological system.

Phew. One thing that we’ve learned for sure: verbosity. More importantly: a biorobot that is embodied and situated is a physically instantiated simulation, a representation of a biological target, a model. But that’s not all. An embodied biorobot is also a physical thing in and of itself. You can’t take that away from me, or the robot. Even if someone, like one of our intellectual predators from Chapter 6, decides that your Evolvabot is a horrible model of an evolving fish, that Evolvabot is still, undeniably, a physical, material entity. Looks like a material entity. Feels like a material entity. Tastes like a material entity. Anyone for a bowl of material entity? Yes, please. I never eat anything but. Yum.

It’s at this stage that folks like me, working with physically embodied robots, like to claim that the digitally simulated robots, the binary things on the computer, aren’t real. My robots are real. It’s those digital simulations that aren’t. I’m okay, you’re a fake.

However, I’m not going to say that, even though I just did (but I didn’t mean it—paradox alert!). What I’m going to say instead, because it’s a more accurate reflection of reality, is that digital simulations do indeed have a physical reality: electrons, within silicon microcircuits, by virtue of their controlled movements, carry out a series of Boolean logic functions that, in aggregate, represent the manipulation of symbols defined by a human as part of an algorithm. Those electrons interact with a world of other electrons as well as the constraints and channels of their semiconductive silicon environment. The electrons are not spirits in the material world. They have mass, charge, and velocity. They behave in the same way that embodied robots do—governed by the laws of physics—when they interact with the world. So to say that those electrons aren’t real and can’t behave is false.

Then why all the fuss? What’s the difference between an embodied robot as a model simulation and a digital robot as a model simulation? Barbara Webb, the creator of the field of biorobotics (see Chapter 1), makes the distinction between modeling in software and hardware: “The most distinctive feature of the biorobotics approach is the use of hardware to model biological mechanisms.”[147] Webb elaborates: “a more fundamental argument for using physical models is that an essential part of the problem of understanding behaviour is understanding the environmental conditions under which it must be performed.”[148]

Now we are onto something. The difference is not physical simulation versus nonphysical simulation. It’s not materialism versus substance dualism (see Chapter 5). The difference is how we model the behavior. Do we create the behavior by representing the interactions of agent and environment algorithmically, mathematically? Or do we create the behavior by not representing the interactions at all but instead letting them “just happen”? When behavior just happens, we remove a layer of the simulation, a layer of representation that, when present, increases the conceptual distance between the target and the model (Figure 7.3).
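To make that contrast concrete, here is a minimal sketch in Python of what “representing the interactions of agent and environment algorithmically” entails. It is not the digi-Tad3 code; the light field, the drag constant, and the phototactic turning rule are all illustrative assumptions. The point is simply that every physical effect the digital agent experiences must be written down, by a human, as an equation.

```python
import math

DT = 0.1          # time step in seconds (assumed value for illustration)
DRAG_COEFF = 0.8  # made-up drag constant standing in for a real hydrodynamic model


def light_intensity(x: float, y: float) -> float:
    """Modeled environment: light falls off with distance from a source at the origin."""
    return 1.0 / (1.0 + x * x + y * y)


def step(state: dict) -> dict:
    """One tick of a simulated perception-action loop, with the physics as equations."""
    # Perception: the virtual 'eye' reads the modeled light field.
    sensed = light_intensity(state["x"], state["y"])

    # Action: a crude phototactic rule turns the agent toward brighter light.
    state["heading"] += 0.5 * (sensed - state["last_sensed"])
    state["last_sensed"] = sensed

    # Physics layer: thrust and drag are representations we must supply ourselves.
    thrust = 0.2
    state["speed"] = max(0.0, state["speed"] + (thrust - DRAG_COEFF * state["speed"]) * DT)
    state["x"] += state["speed"] * math.cos(state["heading"]) * DT
    state["y"] += state["speed"] * math.sin(state["heading"]) * DT
    return state


agent = {"x": 2.0, "y": 1.0, "heading": 0.0, "speed": 0.0, "last_sensed": 0.0}
for _ in range(100):
    agent = step(agent)
```

In the embodied Tadro, none of these lines exist: the water supplies the drag, the lamp and photoresistor supply the light, and the behavior just happens.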

Let me put it this way: if behavior is the physical interaction of—or feedback between, if you prefer—a physical agent and a physical world, then that behavior can be modeled in mathematical representations or not modeled. This makes me think that Rodney Brooks was pulling our collective leg when he said, “The world is its own best model.” Here’s the paradox: the world is not a model; it is simply the world itself. We only make the world into a model when we force it to represent something else. This, then, is the Zen of Physically Embodied Biorobots.

FIGURE 7.3. Things-in-the-world and their representations. Each thing-in-the-world can, if carefully designed by a human, represent other things-in-the-world. The great power of software is that it can represent any thing-in-the-world, even representations of things-in-the-world. The same can’t be said for fish: you would never argue, I hope, that fish-in-the-world represent software-in-the-world. But because representation depends on the intent of the human experimenter, folks can and do argue that they can use fish-in-the-world to represent extinct-fish-in-the-world (primary representation, 1°). We built Tadro3s to represent the tadpole larvae of tunicates that, in turn, we selected to represent early chordate ancestors of vertebrates (secondary representation, 2°, of chordate ancestors by Tadro3s). When we created digi-Tad3s as representations of Tadro3, we created an additional layer of representational distance from the target (tertiary representation, 3°, of chordate ancestors by digi-Tad3s).

147. Barbara Webb, “Can Robots Make Good Models of Behaviour?” Behavioral and Brain Sciences 24, no. 6 (2001): 1048.

148. Ibid., 1049.