
Integrative interaction of emotional speech in audio-visual modality.

Authors

Dong Haibin, Li Na, Fan Lingzhong, Wei Jianguo, Xu Junhai

Affiliations

Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China.

Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China.

Publication

Front Neurosci. 2022 Nov 11;16:797277. doi: 10.3389/fnins.2022.797277. eCollection 2022.

Abstract

Emotional cues are expressed in many ways in daily life, and the emotional information we receive is often conveyed through multiple modalities. Successful social interaction requires combining multisensory cues to accurately determine the emotions of others. The integration mechanism of multimodal emotional information has been widely investigated, and different measures of brain activity have been used to localize the regions involved in the audio-visual integration of emotional information, mainly the bilateral superior temporal regions. However, the methods adopted in these studies are relatively simple, and the stimulus materials rarely contain speech information, so the integration mechanism of emotional speech in the human brain still needs further examination. In this paper, a functional magnetic resonance imaging (fMRI) study with an event-related design was conducted to explore the audio-visual integration mechanism of emotional speech in the human brain, using dynamic facial expressions and emotional speech to express emotions of different valences. Representational similarity analysis (RSA) based on regions of interest (ROIs), whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis were used to analyze and verify the roles of the relevant brain regions. In addition, a weighted RSA method was used to evaluate the contribution of each candidate model to the best-fitting model for each ROI. The results showed that only the left insula was detected by all methods, suggesting that the left insula plays an important role in the audio-visual integration of emotional speech. Whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis together revealed that the bilateral middle temporal gyrus (MTG), right inferior parietal lobule, and bilateral precuneus may also be involved in the audio-visual integration of emotional speech in other respects.
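For readers unfamiliar with the analysis methods named above, the following Python sketch illustrates the core logic of ROI-based RSA and of a supra-additive criterion in simplified form. This is not the authors' pipeline: the condition count, voxel count, model RDM, and beta values are hypothetical placeholders, and only the general approach (comparing a neural RDM with a model RDM by rank correlation, and checking an AV > A + V contrast) follows what the abstract describes.

```python
# Minimal RSA sketch, assuming condensed representational dissimilarity
# matrices (RDMs) compared by Spearman rank correlation. All data here
# are random placeholders, not the study's stimuli or ROI patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical ROI activity patterns: one row per experimental condition
# (e.g., audio-only, visual-only, audio-visual crossed with valence),
# one column per voxel in the ROI (e.g., left insula).
n_conditions, n_voxels = 6, 200
roi_patterns = rng.standard_normal((n_conditions, n_voxels))

# Neural RDM: pairwise correlation distance between condition patterns.
neural_rdm = pdist(roi_patterns, metric="correlation")

# Candidate model RDM encoding a hypothesized similarity structure
# (random placeholder here; a real model would be derived from the
# experimental design, e.g., modality or valence groupings).
model_rdm = pdist(rng.standard_normal((n_conditions, 4)), metric="euclidean")

# Compare neural and model RDMs with rank correlation, as is standard in RSA.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM similarity: rho={rho:.3f}, p={p:.3f}")

# Supra-additive criterion sketch (assumption: the simple AV > A + V
# contrast on mean ROI responses, not necessarily the exact statistic
# used in the paper).
beta_a, beta_v, beta_av = 0.4, 0.5, 1.2   # hypothetical mean betas
print("supra-additive response:", beta_av > (beta_a + beta_v))
```

A weighted RSA, as mentioned in the abstract, would extend this by fitting a combination of several candidate model RDMs to the neural RDM and inspecting the fitted weights; the single-model comparison above only shows the basic RDM-to-RDM step.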


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1fe/9695733/f09d092b4bea/fnins-16-797277-g001.jpg
