
Audio and visual speech emotion activate the left pre-supplementary motor area.

Affiliations

Department of Psychology, Ryerson University, Toronto, ON, M5B 2K3, Canada.

Department of Psychology, Western University, London, ON, Canada.

Publication Info

Cogn Affect Behav Neurosci. 2022 Apr;22(2):291-303. doi: 10.3758/s13415-021-00961-2. Epub 2021 Nov 22.

DOI: 10.3758/s13415-021-00961-2
PMID: 34811708
Abstract

Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through nonverbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech-a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants to listen to speech clips produced in a way that was either happy, sad, or neutral in expression. These stimuli also were presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left presupplementary motor area was more active in response to happy and sad stimuli than neutral stimuli, as indexed by greater mu event-related desynchronization. This effect did not differ by the sensory modality of the stimuli. Activity levels in other sensorimotor brain areas did not differ by emotion, although they were greatest in response to visual-only and audiovisual stimuli. One possible explanation for the pre-SMA result is that this brain area may actively support speech emotion recognition by using our extensive experience expressing emotion to generate sensory predictions that in turn guide our perception.
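The key measure in the abstract, mu event-related desynchronization (ERD), is the percent drop in 8–13 Hz power after a stimulus relative to a pre-stimulus baseline. A minimal single-channel sketch of that computation is below; this is an illustration of the general mu-ERD measure, not the authors' pipeline (which used ICA and source localization), and the function name and parameters are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mu_erd_percent(epoch, fs, baseline, window, band=(8.0, 13.0)):
    """Mu-band event-related desynchronization, as percent power change
    in a post-stimulus window relative to a pre-stimulus baseline.

    epoch    : 1-D array, single-channel EEG for one trial
    fs       : sampling rate in Hz
    baseline : (start, end) sample indices of the pre-stimulus interval
    window   : (start, end) sample indices of the post-stimulus interval
    """
    # Band-pass filter to the mu band; filtfilt is zero-phase, so the
    # filtered signal is not shifted in time relative to the stimulus.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    mu = filtfilt(b, a, epoch)
    # Instantaneous power from the analytic-signal envelope.
    power = np.abs(hilbert(mu)) ** 2
    p_base = power[baseline[0]:baseline[1]].mean()
    p_post = power[window[0]:window[1]].mean()
    # Negative values indicate desynchronization (a post-stimulus power drop),
    # the direction the study reports for happy and sad stimuli in pre-SMA.
    return 100.0 * (p_post - p_base) / p_base
```

In practice such a value would be averaged over trials per condition and compared between emotional and neutral stimuli; a more negative ERD for emotional speech is the pattern the study reports in left pre-SMA.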


Similar Articles

1. Audio and visual speech emotion activate the left pre-supplementary motor area.
Cogn Affect Behav Neurosci. 2022 Apr;22(2):291-303. doi: 10.3758/s13415-021-00961-2. Epub 2021 Nov 22.
2. Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation.
Vision Res. 2021 Mar;180:51-62. doi: 10.1016/j.visres.2020.12.002. Epub 2020 Dec 24.
3. Congruent audiovisual speech enhances auditory attention decoding with EEG.
J Neural Eng. 2019 Nov 6;16(6):066033. doi: 10.1088/1741-2552/ab4340.
4. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.
Brain Behav. 2015 Oct 15;5(11):e00407. doi: 10.1002/brb3.407. eCollection 2015 Nov.
5. Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions.
J Cogn Neurosci. 2017 Mar;29(3):448-466. doi: 10.1162/jocn_a_01057. Epub 2016 Oct 19.
6. Crossmodal and incremental perception of audiovisual cues to emotional speech.
Lang Speech. 2010;53(Pt 1):3-30. doi: 10.1177/0023830909348993.
7. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.
Ear Hear. 2013 Jan-Feb;34(1):110-21. doi: 10.1097/AUD.0b013e3182670993.
8. Affect differentially modulates brain activation in uni- and multisensory body-voice perception.
Neuropsychologia. 2015 Jan;66:134-43. doi: 10.1016/j.neuropsychologia.2014.10.038. Epub 2014 Nov 4.
9. Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli.
Multisens Res. 2020 Jul 7;34(1):17-47. doi: 10.1163/22134808-bja10029.
10. Prediction across sensory modalities: A neurocomputational model of the McGurk effect.
Cortex. 2015 Jul;68:61-75. doi: 10.1016/j.cortex.2015.04.008. Epub 2015 Apr 30.

Cited By

1. Regional brain function study in patients with primary Sjögren's syndrome.
Arthritis Res Ther. 2025 Apr 23;27(1):93. doi: 10.1186/s13075-025-03554-3.
2. Cortical and behavioral tracking of rhythm in music: Effects of pitch predictability, enjoyment, and expertise.
Ann N Y Acad Sci. 2025 Apr;1546(1):120-135. doi: 10.1111/nyas.15315. Epub 2025 Mar 18.
3. A modified neural circuit framework for semantic memory retrieval with implications for circuit modulation to treat verbal retrieval deficits.
Brain Behav. 2024 May;14(5):e3490. doi: 10.1002/brb3.3490.
4. Divergent interpersonal neural synchronization patterns in the first, second language and interlingual communication.
Sci Rep. 2023 May 29;13(1):8706. doi: 10.1038/s41598-023-35923-w.

References

1. Development of Human Emotion Circuits Investigated Using a Big-Data Analytic Approach: Stability, Reliability, and Robustness.
J Neurosci. 2019 Sep 4;39(36):7155-7172. doi: 10.1523/JNEUROSCI.0220-19.2019. Epub 2019 Jul 22.
2. ICLabel: An automated electroencephalographic independent component classifier, dataset, and website.
Neuroimage. 2019 Sep;198:181-197. doi: 10.1016/j.neuroimage.2019.05.026. Epub 2019 May 16.
3. The role of cortical sensorimotor oscillations in action anticipation.
Neuroimage. 2017 Feb 1;146:1102-1114. doi: 10.1016/j.neuroimage.2016.10.022. Epub 2016 Oct 13.
4. Roles of Supplementary Motor Areas in Auditory Processing and Auditory Imagery.
Trends Neurosci. 2016 Aug;39(8):527-542. doi: 10.1016/j.tins.2016.06.003. Epub 2016 Jul 2.
5. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.
Trends Cogn Sci. 2016 Mar;20(3):227-240. doi: 10.1016/j.tics.2015.12.010. Epub 2016 Feb 11.
6. The Extended Mirror Neuron Network: Anatomy, Origin, and Functions.
Neuroscientist. 2017 Feb;23(1):56-67. doi: 10.1177/1073858415626400. Epub 2016 Jul 7.
7. Assessing human mirror activity with EEG mu rhythm: A meta-analysis.
Psychol Bull. 2016 Mar;142(3):291-313. doi: 10.1037/bul0000031. Epub 2015 Dec 21.
8. Inducing a concurrent motor load reduces categorization precision for facial expressions.
J Exp Psychol Hum Percept Perform. 2016 May;42(5):706-18. doi: 10.1037/xhp0000177. Epub 2015 Nov 30.
9. Head movements encode emotions during speech and song.
Emotion. 2016 Apr;16(3):365-80. doi: 10.1037/emo0000106. Epub 2015 Oct 26.
10. The neural bases of emotion regulation.
Nat Rev Neurosci. 2015 Nov;16(11):693-700. doi: 10.1038/nrn4044.