Centre for Brain and Cognitive Development Birkbeck, University of London, London, UK.
Restor Neurol Neurosci. 2010;28(2):219-36. doi: 10.3233/RNN-2010-0499.
Interacting with others by reading their emotional expressions is an essential social skill in humans. This paper addresses how this ability develops during infancy and which brain processes underpin infants' perception of emotion in different modalities.
Literature review.
The first part provides a systematic review of behavioral findings on infants' developing emotion-reading abilities. The second part presents a set of new electrophysiological studies that offer insights into the brain processes underlying these developing abilities. Throughout, evidence from unimodal (face or voice) and multimodal (face and voice) processing of emotion is considered. The implications of the reviewed findings for developmental models of emotion processing are discussed.
The reviewed infant data suggest that (a) early in development, emotion enhances the sensory processing of faces and voices, (b) infants' ability to allocate increased attentional resources to negative emotional information develops earlier in the vocal domain than in the facial domain, and (c) at least by the age of 7 months, infants reliably match and recognize emotional information across face and voice.