Department of Engineering Science, Osaka University, Toyonaka, Osaka, Japan.
JST ERATO, Chiyoda-ku, Tokyo, Japan.
PLoS One. 2021 Aug 10;16(8):e0254905. doi: 10.1371/journal.pone.0254905. eCollection 2021.
Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The method of mapping facial expressions to basic emotions is widely used in research on robot emotional expression. This method holds that each emotional expression has a specific facial muscle activation pattern and that people perceive emotions by reading these patterns. However, recent research on human behavior reveals that some emotional expressions, such as "intense", are difficult to judge as positive or negative from the facial expression alone. It has not yet been investigated whether robots can likewise express ambiguous facial expressions with no clear valence, or whether adding body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android is perceived more clearly by viewers when body postures and movements are added. We conducted three experiments as online surveys among North American residents, with 94, 114, and 114 participants, respectively. In Experiment 1, by calculating entropy, we found that participants had difficulty judging the facial expression "intense" as positive or negative when shown the face alone. In Experiments 2 and 3, using ANOVA, we confirmed that participants judged the facial valence better when shown the android's whole body, even though the facial expression was the same as in Experiment 1. These results suggest that a robot's facial and body expressions should be designed jointly to achieve better communication with humans. To achieve smoother cooperative human-robot interaction, such as education by robots, emotional expression conveyed through a combination of the robot's face and body is necessary to convey the robot's intentions or desires to humans.