RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan.
School of Psychology, Cardiff University, Cardiff, UK.
Sci Rep. 2023 Oct 7;13(1):16952. doi: 10.1038/s41598-023-44140-4.
Humanlike androids can function as social agents in social situations and in experimental research. While some androids can imitate facial emotion expressions, it is unclear whether their expressions tap the same processing mechanisms used for human expressions, for example configural processing. In this study, the effects of global inversion and of asynchrony between facial features, as manipulations of configuration, were compared across android and human dynamic emotion expressions. Seventy-five participants provided (1) emotion recognition ratings for angry and happy expressions and (2) arousal and valence ratings, for upright or inverted, synchronous or asynchronous, android or human dynamic emotion expressions. Asynchrony significantly decreased all ratings of human expressions (except valence for angry expressions) but did not affect ratings of android expressions. Inversion did not affect any measure for either agent type. These results suggest that dynamic facial expressions are processed in a synchrony-based configural manner for humans, but not for androids.