
Taking the lessons learned from Kismet, the lab is now working with Hollywood special-effects wizards from Stan Winston Studios to create Leonardo, the next level in sociable robots. Where Kismet clearly looked like a robot, Leonardo does a better job of hiding the fact and looks like a strange yet cute mammalian creature straight out of a movie. Leonardo is controlled by animatronics, but what separates it from mere expensive puppets is that its movements are completely controlled by a computer and it is programmed to react to and interact with humans as humans do. Leonardo looks at you when you talk to it, tries to infer your intentions from your body movements and gestures, and in return gives you, the user, cues about its mood and beliefs through facial expressions and body gestures.

The goal is to make machines that do not require the user to change his or her ways of being in the world and of interacting with human and nonhuman agents. Breazeal feels that we have evolved a complex social system that works admirably, and that roboticists need to learn how to make their machines fit in with our preexisting ways of interacting rather than foist on us an interface that is alien and hard to use (Breazeal, 2002). This is particularly necessary when dealing with non-technical users, such as users in a home, where the machine needs to fit in as a fellow member of the household and not disrupt the lifeworld and practices of its human inhabitants. This constraint means that the robots must match our physiology and be able to understand our emotions, wants, and needs (Breazeal et al., 2004). If that were achieved, the robot might indeed appear to be the perfect companion.

5.2 Design Methodologies for Affective Robots at MIT and Media Lab Europe

Brian Duffy of Media Lab Europe has written out a list of design methodologies that, he suggests, would employ anthropomorphism in the service of successful social robotic design (Duffy, 2003).

- Use social communication conventions in function and form. For example, a robot with a face that has expressions is easier to communicate with than a faceless box.

- Avoid the “Uncanny Valley.” Robotics researcher Masahiro Mori argues that if a machine looks too human but lacks important social cues and behaviors, it is actually a worse design than a robot with more iconic features and the same behavior, since users will find the synthetic human uncanny or creepy unless, or until, it has the capabilities of a fictional robot like Data on Star Trek: The Next Generation, who, even so, can be a little weird.

- Use natural motion. The motion needs to be somewhat erratic like a natural being and not perfect, flowing, and alien as is sometimes seen in digital animation.

- Balance form and function. The designer needs to not set up false expectations in the user by making the robot look better than it performs.

- Man vs. Machine. Designers need not feel constrained to make the robot fit the human form. Certainly our social infrastructure makes it important that social robots be about the same size as humans so they can fit through doors, etc., but we need not try to make synthetic humans; robots should be built to augment our abilities, not simply to replace us.

- Facilitate the development of a robot’s own identity. The machine needs to participate in human social interaction not just be an object within that social space.

- Emotions. The machine needs artificial emotions to make it more easily understood by non-technical users and to facilitate affective interactions.

- Autonomy. The machine needs to have its own independence and an ability to understand its role in a social context and how to navigate through that milieu (an ability I am sure we all wish we had more of).

Duffy’s list is a great start and nicely condenses a number of the concerns brought up earlier in this chapter. To this list I would like to add some of the design issues mentioned by Cynthia Breazeal in her book Designing Social Robots (2002) that are not covered by the list above.

- The robot needs to have a homeostatic sense of “well-being” that it can regulate through interactions with its users. It has to know what it wants, and know how to get it.

- The robot needs an appropriate attention system. It has to be able to attend to what is important and ignore what is not, given the milieu it is operating in.

- The robot has to be able to give clues about its internal “emotional” state, and it also has to be able to read those off of its human users accurately.

- Learning is important and users have to be confident that the machine will learn from its mistakes.

- Eventually the machines will need robust personalities, better abilities at discourse, a sense of empathy for their users and other robots, a theory of mind, and an autobiographical memory, but these are very ambitious requirements and may take many decades to achieve.

Taken together these ideas form a concise description of the design philosophy that is being pursued by the most successful practitioners of affective robotics in the United States and Europe. In the concluding section I will offer a critique of what we have learned and offer some ideas meant to enhance the usefulness of affective robotics.

6 Concluding Remarks
6.1 Robots and Phenomenology

Robots are situated at the end of a trajectory of human technology that began with simple, human-directed hand tools and has evolved over history into the self-directed automata that are beginning to emerge today. Robots, as artifacts, are produced out of human desires interacting with technical systems and practices, and as such they shape and are shaped by the human lifeworld that produced them. Robots are objects, but as Carl Mitcham suggests, “[t]echnological objects, however, are not just objects, energy transforming tools and machines, artifacts, with distinctive internal structures, or things made by human beings; they are also objects that influence human experience” (Mitcham, 1994, 176). Robots and humans form a cybernetic system in which humans no longer specifically direct the behavior of the robotic agents. As machines become more autonomous they become what Mitcham calls “containers for processes,” meaning that these technologies are not just tools but also encode their own use within their programming; taken together, these machines and the technical and human systems they interact with can be described as “objectified processes” (Mitcham, 1994, 168). This means that we have to take seriously precisely what processes we are automating and how we are doing it, since robots will have a certain artifactology, meaning that “…artifacts have consequences; there is considerable disagreement about the character of those consequences and whether they are to be promoted or restrained” (Mitcham, 1994, 182). I will now argue just what kinds of affective robotics systems should be promoted or restrained.

There are a number of possible critiques of personal robotic technology from the perspective of the philosophy of technology, and I would like to address what I believe to be the most interesting. When we look at the strategy of building personal robotics systems that work to seamlessly automate the modern household, we can see that the objectified processes are those of home life. The dream is to remove the workload of running a home from its inhabitants by having that work done, as unobtrusively as possible, by systems that do it for us: robots that do our laundry, clean, cook, etc. Mitcham, inspired by the work of Ivan Illich, argues that instead of tools that do the work for us automatically, perhaps we need more tools that interact with us, using our energy and guidance, since: