Sato Wataru, Shimokawa Koh, Minato Takashi
Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan.
Interactive Robot Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan.
Sci Rep. 2025 Jul 17;15(1):25986. doi: 10.1038/s41598-025-11745-w.
Multimodal emotional expressions play an essential role in real-life communication. Mehrabian and colleagues suggested that facial expressions may have the greatest emotional impact, followed by vocal and then verbal expressions. However, no study has examined all three modalities in face-to-face situations within a single experiment, possibly because of the limitations of human acting. We postulated that an android could be a useful solution to this problem. In this study, the android Nikola systematically varied its facial, vocal, and verbal expressions of negative, neutral, and positive emotions in a face-to-face situation. Participants rated the emotional valence of the expressions. The modalities ranked, from greatest to least emotional impact, as follows: facial expressions, then vocal expressions, and finally verbal expressions. Additional experiments with human raters and ChatGPT showed comparable emotional valence for facial, vocal, and verbal expressions presented unimodally. The results provide the first evidence validating Mehrabian's model, demonstrating the importance of facial or nonverbal expressions in face-to-face emotional communication.