
Crossmodal integration of emotional information from face and voice in the infant brain.

Author information

Grossmann Tobias, Striano Tricia, Friederici Angela D

Affiliation

Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.

Publication information

Dev Sci. 2006 May;9(3):309-15. doi: 10.1111/j.1467-7687.2006.00494.x.

Abstract

We examined 7-month-old infants' processing of emotionally congruent and incongruent face-voice pairs using ERP measures. Infants watched facial expressions (happy or angry) and, after a delay of 400 ms, heard a word spoken with a prosody that was either emotionally congruent or incongruent with the face being presented. The ERP data revealed that the amplitude of a negative component and a subsequent positive component in infants' ERPs varied as a function of crossmodal emotional congruity. An emotionally incongruent prosody elicited a larger negative component in infants' ERPs than did an emotionally congruent prosody. Conversely, the amplitude of infants' positive component was larger to emotionally congruent than to incongruent prosody. Previous work has shown that an attenuation of the negative component and an enhancement of the later positive component in infants' ERPs reflect the recognition of an item. Thus, the current findings suggest that 7-month-olds integrate emotional information across modalities and recognize common affect in the face and voice.
