Takayuki Kanda and his fellow researchers have discovered a number of interesting things about the design of affective robotics technology. Foremost is the data they have gathered suggesting that both adults and children are willing to suspend disbelief and attribute real intelligence and friendly feelings to these machines, even at the modest level of behavior possible with today's technology (Kanda and Ishiguro, 2005; Kanda et al., 2004). They have also found that the appearance of the robot matters: reactions to the robot change when its outward appearance is altered, even when the underlying programmed behaviors remain the same (Kanda et al., 2004).
Research in the social psychology of human-robot interaction, such as the work we looked at in the last section, has inspired other roboticists to attempt to harness the natural psychological tendencies of humans in the design of affective robots. Since we all seem to anthropomorphize objects in our environment, this tendency can make the design of affective robots much easier to accomplish. For instance, Daniel Dennett has written persuasively on "as-if" intentionality, our habit of finding it expedient to treat certain things we interact with as if they had real intentionality (Dennett, 1996). This tendency also seems to extend to the emotional realm. When dealing with affective robots, people seem willing to treat the robot as if it really did have some fondness for them, even though the engineers who built it would never be willing to ascribe these emotions to the machine, since they know the synthetic tricks they used to simulate them.
We might want to push this idea philosophically and wonder whether, once we have a complete understanding of neuroscience, our so-called 'real' emotions might not turn out to be of the as-if variety Dennett describes. But let us leave that for another day. What is important to our discussion of affective robotic design is that this trick does work and should be used in designing these machines. Still, it is important not to push this psychological tendency too far. Humans are willing to ascribe abilities to machines that the machines do not have, but only to a point. Brian Duffy of the MIT Media Lab Europe reminds us that we need not attempt to build ersatz humans that will be ultimately unconvincing; instead, it is the balance of the robots' ". . . anthropomorphic qualities for bootstrapping and their inherent advantage as machines, rather than seeing this as a disadvantage, that will lead to their success" (Duffy, 2003). In other words, successful affective robots will be machines designed to do what machines do best, but in a way that engages the users' natural anthropomorphizing tendencies and so helps embed the machine in the user's lifeworld. This means that affective robots are at their best when they elicit our natural human predisposition to grant personalities to the objects around us, making it easier for us to interact with the technology.
The roboticist Masahiro Mori describes an interesting psychological barrier that roboticists must contend with, which he calls the "uncanny valley" (Mori, 1970). The uncanny valley is found by graphing human likeness against familiarity: as a machine becomes more similar to humans in likeness and function, it evokes more positive feelings of familiarity. But Mori claims that past a certain point, further human likeness comes to be seen as uncanny and undesirable, until the machine reaches a very high level of human likeness, at which point he posits that feelings of familiarity rise again among the humans interacting with it. The uncanny valley is the dip in familiarity between the first and second peaks of positive feeling (Mori, 1970). Mori suggests that it is best for roboticists to design robots in such a way that they sit firmly on the first peak, before the uncanny valley; they should be humanlike in some ways but clearly machines in others. This way they are not threatening and people will happily interact with them. This is a sound design principle if we are to build machines that enhance the human lifeworld rather than disrupt it. In the following sections we will look at some examples of how roboticists in Japan, Europe, and the United States are thinking about ways to design affective robotics that take into account the ideas and concepts we have discussed above.
Ever since the postwar period in Japan, the humanoid robot has been a staple of toy design and of the television and movie entertainment industry. Characters such as the friendly, loyal, and heroic little robot boy Tetsuwan Atom (or Astro Boy, as he is marketed in the West), who was introduced to the world in a popular anime series begun in 1963, have helped to put a pleasant and obliging face on robotics technology. This interpretation of the robot is quite different from the slave-master paradigm typical of Western science fiction, which, from the first mention of robots in the play R.U.R. to the latest blockbuster movies, has cast robots as menial laborers who will eventually rise up to punish their tyrannical human masters. Of course, this darker concept of robotics can be found in some Asian science fiction stories, and the friendly robot is not absent from the West, but overall there is a noticeable trend here.
This friendly take on robotics technology may stem from the vastly different relationship to technology that distinguishes Japanese culture from that of the West. One theory is that because traditional Japanese culture holds that everything, including nonliving objects, has a spiritual essence, the Japanese are more likely to be unbothered by positing some sort of real lifelikeness in machines, a prospect that we in the West find philosophically uncomfortable (Kaheyama, 2004; Perkowitz, 2004). The West, deeply influenced by the materialism/dualism debate, has more trouble with the concept of having an emotional relationship with a machine. The metaphysics of Buddhism also allows for an entirely different relationship to robots than that of the Abrahamic religions of the West and Middle East. Whereas orthodox Christians, Muslims, and Jews might see building a robot as some sort of perverse sub-creation or ultimate graven image, Buddhism allows the machine to share in the buddha-nature of its creator, or so argues the roboticist and Buddhist scholar Masahiro Mori in his book, The Buddha in the Robot: A Robot Engineer's Thoughts on Science and Religion: