


Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration.

Affiliations

Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands; Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom.

Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, United Kingdom; Neuroscience Institute of Timone, Coeducational Research Unit 7289, National Center of Scientific Research-Aix-Marseille University, F-13284 Marseille, France.

Publication

J Neurosci. 2014 May 14;34(20):6813-21. doi: 10.1523/JNEUROSCI.4478-13.2014.

DOI: 10.1523/JNEUROSCI.4478-13.2014
PMID: 24828635
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4019796/
Abstract

The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice or rather by multimodal neurons receiving input from the two modalities is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion-although there was a greater weighting of face information-and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.


Similar Articles

1. Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration.
J Neurosci. 2014 May 14;34(20):6813-21. doi: 10.1523/JNEUROSCI.4478-13.2014.
2. Affect differentially modulates brain activation in uni- and multisensory body-voice perception.
Neuropsychologia. 2015 Jan;66:134-43. doi: 10.1016/j.neuropsychologia.2014.10.038. Epub 2014 Nov 4.
3. Association of trait emotional intelligence and individual fMRI-activation patterns during the perception of social signals from voice and face.
Hum Brain Mapp. 2010 Jul;31(7):979-91. doi: 10.1002/hbm.20913.
4. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.
J Neurosci. 2014 Feb 12;34(7):2524-37. doi: 10.1523/JNEUROSCI.2805-13.2014.
5. People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus.
Cortex. 2014 Jan;50:125-36. doi: 10.1016/j.cortex.2013.07.011. Epub 2013 Aug 2.
6. Cerebral representation of non-verbal emotional perception: fMRI reveals audiovisual integration area between voice- and face-sensitive regions in the superior temporal sulcus.
Neuropsychologia. 2009 Dec;47(14):3059-66. doi: 10.1016/j.neuropsychologia.2009.07.001. Epub 2009 Jul 21.
7. Activation in the angular gyrus and in the pSTS is modulated by face primes during voice recognition.
Hum Brain Mapp. 2017 May;38(5):2553-2565. doi: 10.1002/hbm.23540. Epub 2017 Feb 20.
8. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.
Neuroimage. 2007 Oct 1;37(4):1445-56. doi: 10.1016/j.neuroimage.2007.06.020. Epub 2007 Jul 4.
9. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.
Cereb Cortex. 2015 Feb;25(2):384-95. doi: 10.1093/cercor/bht228. Epub 2013 Aug 26.
10. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities.
Behav Res Methods. 2024 Aug;56(5):5103-5115. doi: 10.3758/s13428-023-02249-4. Epub 2023 Oct 11.

Cited By

1. Neural representations of naturalistic person identities while watching a feature film.
Imaging Neurosci (Camb). 2023 Aug 21;1. doi: 10.1162/imag_a_00009. eCollection 2023.
2. Differential brain activation and network connectivity in social interactions presence and absence of physical contact.
Commun Biol. 2025 Jul 2;8(1):986. doi: 10.1038/s42003-025-08417-w.
3. Setting the tone: crossmodal emotional face-voice combinations in continuous flash suppression.
Front Psychol. 2025 Jan 16;15:1472489. doi: 10.3389/fpsyg.2024.1472489. eCollection 2024.
4. Neural Modulation Alteration to Positive and Negative Emotions in Depressed Patients: Insights from fMRI Using Positive/Negative Emotion Atlas.
Tomography. 2024 Dec 9;10(12):2014-2037. doi: 10.3390/tomography10120144.
5. Distributed network flows generate localized category selectivity in human visual cortex.
PLoS Comput Biol. 2024 Oct 22;20(10):e1012507. doi: 10.1371/journal.pcbi.1012507. eCollection 2024 Oct.
6. The Left Amygdala and Right Frontoparietal Cortex Support Emotional Adaptation Aftereffects.
Brain Sci. 2024 Mar 6;14(3):257. doi: 10.3390/brainsci14030257.
7. Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain.
Sci Adv. 2024 Mar 8;10(10):eadk6840. doi: 10.1126/sciadv.adk6840.
8. Differential effects of intra-modal and cross-modal reward value on perception: ERP evidence.
PLoS One. 2023 Jun 30;18(6):e0287900. doi: 10.1371/journal.pone.0287900. eCollection 2023.
9. Integrative interaction of emotional speech in audio-visual modality.
Front Neurosci. 2022 Nov 11;16:797277. doi: 10.3389/fnins.2022.797277. eCollection 2022.
10. Emotional prosody recognition is impaired in Alzheimer's disease.
Alzheimers Res Ther. 2022 Apr 5;14(1):50. doi: 10.1186/s13195-022-00989-7.

References

1. Dissociating task difficulty from incongruence in face-voice emotion integration.
Front Hum Neurosci. 2013 Nov 13;7:744. doi: 10.3389/fnhum.2013.00744. eCollection 2013.
2. Adaptation aftereffects in vocal emotion perception elicited by expressive faces and voices.
PLoS One. 2013 Nov 13;8(11):e81691. doi: 10.1371/journal.pone.0081691. eCollection 2013.
3. People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus.
Cortex. 2014 Jan;50:125-36. doi: 10.1016/j.cortex.2013.07.011. Epub 2013 Aug 2.
4. Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG.
PLoS One. 2013 Aug 12;8(8):e70648. doi: 10.1371/journal.pone.0070648. eCollection 2013.
5. Functional responses and structural connections of cortical areas for processing faces and voices in the superior temporal sulcus.
Neuroimage. 2013 Aug 1;76:45-56. doi: 10.1016/j.neuroimage.2013.02.064. Epub 2013 Mar 16.
6. Supramodal representation of emotions.
J Neurosci. 2011 Sep 21;31(38):13635-43. doi: 10.1523/JNEUROSCI.2833-11.2011.
7. Cerebral correlates and statistical criteria of cross-modal face and voice integration.
Seeing Perceiving. 2011;24(4):351-67. doi: 10.1163/187847511X584452.
8. Emotional perception: meta-analyses of face and natural scene processing.
Neuroimage. 2011 Feb 1;54(3):2524-33. doi: 10.1016/j.neuroimage.2010.10.011. Epub 2010 Oct 14.
9. Cross-modal face identity aftereffects and their relation to priming.
J Exp Psychol Hum Percept Perform. 2010 Aug;36(4):876-91. doi: 10.1037/a0018731.
10. Functional atlas of emotional faces processing: a voxel-based meta-analysis of 105 functional magnetic resonance imaging studies.
J Psychiatry Neurosci. 2009 Nov;34(6):418-32.