


The temporal dynamics of processing emotions from vocal, facial, and bodily expressions.

Affiliation

Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1A, 04107 Leipzig, Germany.

Publication Info

Neuroimage. 2011 Sep 15;58(2):665-74. doi: 10.1016/j.neuroimage.2011.06.035. Epub 2011 Jun 22.

DOI: 10.1016/j.neuroimage.2011.06.035
PMID: 21718792
Abstract

Face-to-face communication works multimodally. Not only do we employ vocal and facial expressions; body language provides valuable information as well. Here we focused on multimodal perception of emotion expressions, monitoring the temporal unfolding of the interaction of different modalities in the electroencephalogram (EEG). In the auditory condition, participants listened to emotional interjections such as "ah", while they saw mute video clips containing emotional body language in the visual condition. In the audiovisual condition participants saw video clips with matching interjections. In all three conditions, the emotions "anger" and "fear", as well as non-emotional stimuli were used. The N100 amplitude was strongly reduced in the audiovisual compared to the auditory condition, suggesting a significant impact of visual information on early auditory processing. Furthermore, anger and fear expressions were distinct in the auditory but not the audiovisual condition. Complementing these event-related potential (ERP) findings, we report strong similarities in the alpha- and beta-band in the visual and the audiovisual conditions, suggesting a strong visual processing component in the perception of audiovisual stimuli. Overall, our results show an early interaction of modalities in emotional face-to-face communication using complex and highly natural stimuli.


Similar Articles

1. The temporal dynamics of processing emotions from vocal, facial, and bodily expressions. Neuroimage. 2011 Sep 15;58(2):665-74. doi: 10.1016/j.neuroimage.2011.06.035. Epub 2011 Jun 22.
2. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study. Neuroimage. 2007 Oct 1;37(4):1445-56. doi: 10.1016/j.neuroimage.2007.06.020. Epub 2007 Jul 4.
3. Emotional face expressions are differentiated with brain oscillations. Int J Psychophysiol. 2007 Apr;64(1):91-100. doi: 10.1016/j.ijpsycho.2006.07.003. Epub 2006 Dec 5.
4. The role of emotion in dynamic audiovisual integration of faces and voices. Soc Cogn Affect Neurosci. 2015 May;10(5):713-20. doi: 10.1093/scan/nsu105. Epub 2014 Aug 20.
5. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis. Brain Res. 2008 Sep 16;1230:168-76. doi: 10.1016/j.brainres.2008.07.024. Epub 2008 Jul 15.
6. Emotional object and scene stimuli modulate subsequent face processing: an event-related potential study. Brain Res Bull. 2008 Nov 25;77(5):264-73. doi: 10.1016/j.brainresbull.2008.08.011. Epub 2008 Sep 13.
7. Don't look at me in anger! Enhanced processing of angry faces in anticipation of public speaking. Psychophysiology. 2010 Mar 1;47(2):271-80. doi: 10.1111/j.1469-8986.2009.00938.x. Epub 2009 Dec 16.
8. Responses of single neurons in monkey amygdala to facial and vocal emotions. J Neurophysiol. 2007 Feb;97(2):1379-87. doi: 10.1152/jn.00464.2006. Epub 2006 Dec 20.
9. Time course of implicit processing and explicit processing of emotional faces and emotional words. Biol Psychol. 2011 May;87(2):265-74. doi: 10.1016/j.biopsycho.2011.03.008. Epub 2011 Mar 31.
10. Selective attention modulates early human evoked potentials during emotional face-voice processing. J Cogn Neurosci. 2015 Apr;27(4):798-818. doi: 10.1162/jocn_a_00734. Epub 2014 Sep 30.

Cited By

1. The role of cognitive load in automatic integration of emotional information from face and body. Sci Rep. 2025 Aug 1;15(1):28184. doi: 10.1038/s41598-025-12511-8.
2. Vocal Emotion Perception and Musicality-Insights from EEG Decoding. Sensors (Basel). 2025 Mar 8;25(6):1669. doi: 10.3390/s25061669.
3. Altered processing of consecutive changeable emotional voices in individuals with autistic traits: behavioral and ERP studies. BMC Psychol. 2025 Mar 17;13(1):261. doi: 10.1186/s40359-025-02452-2.
4. The dissociating effects of fear and disgust on multisensory integration in autism: evidence from evoked potentials. Front Neurosci. 2024 Aug 5;18:1390696. doi: 10.3389/fnins.2024.1390696. eCollection 2024.
5. Multisensory integration of speech and gestures in a naturalistic paradigm. Hum Brain Mapp. 2024 Aug 1;45(11):e26797. doi: 10.1002/hbm.26797.
6. Perceptual integration of bodily and facial emotion cues in chimpanzees and humans. PNAS Nexus. 2024 Jan 18;3(2):pgae012. doi: 10.1093/pnasnexus/pgae012. eCollection 2024 Feb.
7. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities. Behav Res Methods. 2024 Aug;56(5):5103-5115. doi: 10.3758/s13428-023-02249-4. Epub 2023 Oct 11.
8. Automatic Brain Categorization of Discrete Auditory Emotion Expressions. Brain Topogr. 2023 Nov;36(6):854-869. doi: 10.1007/s10548-023-00983-8. Epub 2023 Aug 28.
9. Brain-to-brain mechanisms underlying pain empathy and social modulation of pain in the patient-clinician interaction. Proc Natl Acad Sci U S A. 2023 Jun 27;120(26):e2212910120. doi: 10.1073/pnas.2212910120. Epub 2023 Jun 20.
10. Emotional scene processing in biotypes of psychosis. Psychiatry Res. 2023 Jun;324:115227. doi: 10.1016/j.psychres.2023.115227. Epub 2023 Apr 24.