We study sound as a modality through which robots can express emotion and which can shape people’s perceptions of robots. With the former aim in mind, we are implementing an emotion recognition algorithm on the robot Haru. The algorithm will be tested in both individual and group settings to monitor interaction. The purpose of this research is to use emotion-based data to improve the design of “at-home” robots and create more positive, natural interactions between humans and robots.
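The specific algorithm is not detailed here; the sketch below illustrates one common baseline approach to audio-based emotion recognition (mean MFCC features fed to an SVM classifier). The label set, feature choice, and function names are our own illustrative assumptions, not Haru’s actual pipeline.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical label set; the actual categories used on Haru may differ.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def extract_features(wav_path: str) -> np.ndarray:
    """Summarize an utterance as mean MFCCs, a common speech-emotion baseline."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dim vector per utterance

def train(wav_paths, labels):
    """Fit a scaled RBF-SVM classifier on utterance-level features."""
    X = np.stack([extract_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

def predict(clf, wav_path: str) -> str:
    """Return the predicted emotion label for a single utterance."""
    return clf.predict(extract_features(wav_path).reshape(1, -1))[0]
```

In practice, a deployed system would run this prediction step continuously over short audio windows so the robot can track emotion as an interaction unfolds.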
We are also interested in how the sounds robots make affect people’s perceptions of and attitudes toward robots. In one study, we examined whether a mismatch between a robot’s appearance (android, humanoid, minimalist) and its voice (human or robotic) affected people’s ratings of the robot’s discomfort, competence, eeriness, warmth, humanness, and attractiveness. This research helps identify the combinations of appearance and sound that the general public finds most acceptable when interacting with robots.
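As an illustration of how such a mismatch effect could be tested, the sketch below fits a two-way ANOVA on one rating scale in a 3 (appearance) × 2 (voice) design, where the interaction term captures the appearance–voice mismatch. The data are simulated placeholders, and this analysis is an assumption for illustration, not the study’s reported method.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
appearances = ["android", "humanoid", "minimalist"]
voices = ["human", "robotic"]

# Hypothetical long-format ratings: one row per participant per condition.
rows = [
    {"appearance": a, "voice": v, "eeriness": rng.normal(4, 1)}
    for a in appearances for v in voices for _ in range(20)
]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of appearance and voice, plus their
# interaction, which is where a mismatch effect would show up.
model = ols("eeriness ~ C(appearance) * C(voice)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The same model would be fit separately for each of the other rating scales (discomfort, competence, warmth, humanness, attractiveness).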