Emotional voices in context: a neurobiological model of multimodal affective information processing.

Author Information

Department of Psychiatry and Psychotherapy, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany.

Publication Information

Phys Life Rev. 2011 Dec;8(4):383-403. doi: 10.1016/j.plrev.2011.10.002. Epub 2011 Oct 19.

Abstract

Just as eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds - their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as a valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e. the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allow us to outline the cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g. co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals, both in isolation and in the context of accompanying (facial and verbal) emotional cues.
