Integrative interaction of emotional speech in audio-visual modality.

Authors

Dong Haibin, Li Na, Fan Lingzhong, Wei Jianguo, Xu Junhai

Affiliations

Tianjin Key Lab of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China.

Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China.

Publication

Front Neurosci. 2022 Nov 11;16:797277. doi: 10.3389/fnins.2022.797277. eCollection 2022.

DOI: 10.3389/fnins.2022.797277
PMID: 36440282
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9695733/
Abstract

Emotional cues are expressed in many ways in our daily life, and the emotional information we receive is often conveyed by multiple modalities. Successful social interactions require a combination of multisensory cues to accurately determine the emotions of others. The integration mechanism of multimodal emotional information has been widely investigated: different brain activity measurement methods have been used to locate the brain regions involved in the audio-visual integration of emotional information, mainly the bilateral superior temporal regions. However, the methods adopted in these studies are relatively simple, and the study materials rarely contain speech information, so the integration mechanism of emotional speech in the human brain still needs further examination. In this paper, a functional magnetic resonance imaging (fMRI) study with an event-related design was conducted to explore the audio-visual integration mechanism of emotional speech in the human brain, using dynamic facial expressions and emotional speech to express emotions of different valences. Representational similarity analysis (RSA) based on regions of interest (ROIs), whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis were used to analyze and verify the roles of the relevant brain regions. Meanwhile, a weighted RSA method was used to evaluate the contribution of each candidate model to the best-fitting model for each ROI. The results showed that only the left insula was detected by all methods, suggesting that the left insula plays an important role in the audio-visual integration of emotional speech. Whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis together revealed that the bilateral middle temporal gyrus (MTG), the right inferior parietal lobule, and the bilateral precuneus may also be involved in the audio-visual integration of emotional speech.
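Background note: the abstract's core method, ROI-based RSA, compares a neural representational dissimilarity matrix (RDM) against a model RDM. The minimal Python sketch below illustrates this with toy data; the random patterns, the valence labels, and all variable names are illustrative assumptions, not the authors' code or stimuli.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy ROI data: one multivoxel activity pattern per experimental condition
# (in the study, conditions would be audio-visual emotional speech stimuli).
n_conditions, n_voxels = 8, 200
roi_patterns = rng.standard_normal((n_conditions, n_voxels))

# Neural RDM: pairwise correlation distance between condition patterns.
neural_rdm = squareform(pdist(roi_patterns, metric="correlation"))

# Hypothetical model RDM built from assumed valence labels: conditions with
# more similar valence are predicted to evoke more similar patterns.
valence = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
model_rdm = np.abs(valence[:, None] - valence[None, :])

# Standard RSA statistic: Spearman correlation between the off-diagonal
# (upper-triangle) entries of the neural and model RDMs.
iu = np.triu_indices(n_conditions, k=1)
rho, p = spearmanr(neural_rdm[iu], model_rdm[iu])
print(f"model fit: Spearman rho = {rho:.3f}, p = {p:.3f}")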

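Similarly, the supra-additive criterion named in the abstract is commonly operationalized as a test of whether the bimodal response exceeds the sum of the unimodal responses (AV > A + V). A sketch under that assumption follows; the per-subject GLM beta values are made up for illustration and are not the study's data.

import numpy as np
from scipy.stats import ttest_1samp

# Illustrative per-subject GLM beta estimates for one ROI (assumed values).
beta_av = np.array([1.9, 2.1, 2.4, 1.7, 2.2])  # audio-visual condition
beta_a = np.array([0.8, 0.9, 1.1, 0.7, 1.0])   # auditory-only condition
beta_v = np.array([0.7, 0.8, 1.0, 0.6, 0.9])   # visual-only condition

# Supra-additivity: bimodal response exceeds the summed unimodal responses.
diff = beta_av - (beta_a + beta_v)
t, p = ttest_1samp(diff, popmean=0.0, alternative="greater")
print(f"AV > A + V: t = {t:.2f}, p = {p:.3f}")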

Figures (PMC full text):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1fe/9695733/f09d092b4bea/fnins-16-797277-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1fe/9695733/79a47cb17521/fnins-16-797277-g002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1fe/9695733/774422a18cb5/fnins-16-797277-g003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1fe/9695733/2b4a0f80ce41/fnins-16-797277-g004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d1fe/9695733/319677d5b13d/fnins-16-797277-g005.jpg

Similar Articles

1. Integrative interaction of emotional speech in audio-visual modality.
Front Neurosci. 2022 Nov 11;16:797277. doi: 10.3389/fnins.2022.797277. eCollection 2022.
2. Weighted RSA: An Improved Framework on the Perception of Audio-visual Affective Speech in Left Insula and Superior Temporal Gyrus.
Neuroscience. 2021 Aug 10;469:46-58. doi: 10.1016/j.neuroscience.2021.06.002. Epub 2021 Jun 11.
3. Multisensory and modality specific processing of visual speech in different regions of the premotor cortex.
Front Psychol. 2014 May 5;5:389. doi: 10.3389/fpsyg.2014.00389. eCollection 2014.
4. Integration of cross-modal emotional information in the human brain: an fMRI study.
Cortex. 2010 Feb;46(2):161-9. doi: 10.1016/j.cortex.2008.06.008. Epub 2008 Jun 29.
5. Neural correlates of successful emotional episodic encoding and retrieval: An SDM meta-analysis of neuroimaging studies.
Neuropsychologia. 2020 Jun;143:107495. doi: 10.1016/j.neuropsychologia.2020.107495. Epub 2020 May 13.
6. Modality-general representations of valences perceived from visual and auditory modalities.
Neuroimage. 2019 Dec;203:116199. doi: 10.1016/j.neuroimage.2019.116199. Epub 2019 Sep 16.
7. Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG.
PLoS One. 2013 Aug 12;8(8):e70648. doi: 10.1371/journal.pone.0070648. eCollection 2013.
8. An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.
Exp Brain Res. 2009 Sep;198(2-3):183-94. doi: 10.1007/s00221-009-1783-8. Epub 2009 Apr 8.
9. Superior temporal activation in response to dynamic audio-visual emotional cues.
Brain Cogn. 2009 Mar;69(2):269-78. doi: 10.1016/j.bandc.2008.08.007. Epub 2008 Sep 21.
10. The connectivity signature of co-speech gesture integration: The superior temporal sulcus modulates connectivity between areas related to visual gesture and auditory speech processing.
Neuroimage. 2018 Nov 1;181:539-549. doi: 10.1016/j.neuroimage.2018.07.037. Epub 2018 Jul 17.

Cited By

1. Emulating sensation by bridging neuromorphic computing and multisensory integration.
Patterns (N Y). 2025 Apr 29;6(7):101238. doi: 10.1016/j.patter.2025.101238. eCollection 2025 Jul 11.
2. Sensory alterations in post-traumatic stress disorder.
Curr Opin Neurobiol. 2024 Feb;84:102821. doi: 10.1016/j.conb.2023.102821. Epub 2023 Dec 13.
