

Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.

Author information

Isomura Tomoko, Nakano Tamami

Affiliations

Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan


Publication information

Proc Biol Sci. 2016 Dec 14;283(1844). doi: 10.1098/rspb.2016.1948.

Abstract

Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present.


Similar articles

Measuring facial mimicry: Affdex vs. EMG.
PLoS One. 2024 Jan 2;19(1):e0290569. doi: 10.1371/journal.pone.0290569. eCollection 2024.

Cited by

Inability to move one's face dampens facial expression perception.
Cortex. 2023 Dec;169:35-49. doi: 10.1016/j.cortex.2023.08.014. Epub 2023 Sep 30.

References

Hebbian learning and predictive mirror neurons for actions, sensations and emotions.
Philos Trans R Soc Lond B Biol Sci. 2014 Apr 28;369(1644):20130175. doi: 10.1098/rstb.2013.0175. Print 2014.

Mirror neurons: from origin to function.
Behav Brain Sci. 2014 Apr;37(2):177-92. doi: 10.1017/S0140525X13000903.

Visual speech form influences the speed of auditory speech processing.
Brain Lang. 2013 Sep;126(3):350-6. doi: 10.1016/j.bandl.2013.06.008. Epub 2013 Aug 11.
