Voice and emotion processing in the human neonatal brain.

Author Information

Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan.

Publication Information

J Cogn Neurosci. 2012 Jun;24(6):1411-9. doi: 10.1162/jocn_a_00214. Epub 2012 Feb 23.

Abstract

Although the voice-sensitive neural system emerges very early in development, it has yet to be demonstrated whether the neonatal brain is sensitive to voice perception. We measured the EEG mismatch response (MMR) elicited by emotionally spoken syllables "dada" along with correspondingly synthesized nonvocal sounds, whose fundamental frequency contours were matched, in 98 full-term newborns aged 1-5 days. In Experiment 1, happy syllables relative to nonvocal sounds elicited an MMR lateralized to the right hemisphere. In Experiment 2, fearful syllables elicited stronger amplitudes than happy or neutral syllables, and this response had no sex differences. In Experiment 3, angry versus happy syllables elicited an MMR, although their corresponding nonvocal sounds did not. Here, we show that affective discrimination is selectively driven by voice processing per se rather than low-level acoustical features and that the cerebral specialization for human voice and emotion processing emerges over the right hemisphere during the first days of life.
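In oddball paradigms of this kind, the mismatch response (MMR) is typically quantified as a difference wave: the averaged response to the rare (deviant) stimulus minus the averaged response to the frequent (standard) stimulus. Below is a minimal NumPy sketch of that computation on synthetic epoched data; the channel names, sampling rate, epoch window, and trial counts are illustrative assumptions, not the recording or analysis parameters of this study.

```python
# Minimal sketch: MMR difference wave from synthetic epoched EEG.
# All parameters (channels, sampling rate, window, trial counts) are
# assumptions for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                 # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.6, 1 / sfreq)     # epoch window in seconds
channels = ["F3", "F4", "C3", "C4"]         # illustrative left/right pairs

def simulate_epochs(n_trials, amp):
    """Fake epochs of shape (trials, channels, samples) with a response bump."""
    bump = amp * np.exp(-((times - 0.25) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0.0, 1.0, size=(n_trials, len(channels), times.size))
    return noise + bump

standard_epochs = simulate_epochs(n_trials=400, amp=0.5)   # frequent stimulus
deviant_epochs = simulate_epochs(n_trials=80, amp=1.5)     # rare stimulus

# Average across trials to get event-related responses,
# then subtract: MMR = deviant ERP - standard ERP.
standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)
mmr = deviant_erp - standard_erp

# Crude lateralization check: mean MMR amplitude over right- vs.
# left-hemisphere channels in a post-stimulus window.
window = (times >= 0.15) & (times <= 0.35)
right = mmr[[channels.index("F4"), channels.index("C4")]][:, window].mean()
left = mmr[[channels.index("F3"), channels.index("C3")]][:, window].mean()
print(f"mean MMR (arbitrary units) right: {right:.2f}, left: {left:.2f}")
```

The lateralization comparison at the end is only a toy illustration of what a right-hemisphere bias in the difference wave would look like; the study's actual statistics are not reproduced here.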
