

Time-resolved discrimination of audio-visual emotion expressions.

Affiliations

Institute of Research in Psychology (IPSY), Institute of Neuroscience (IoNS), University of Louvain (UCL), Louvain-la-Neuve, Belgium.

Institute of Research in Psychology (IPSY), Institute of Neuroscience (IoNS), University of Louvain (UCL), Louvain-la-Neuve, Belgium; Centre for Mind/Brain Sciences, University of Trento, Trento, Italy.

Publication Information

Cortex. 2019 Oct;119:184-194. doi: 10.1016/j.cortex.2019.04.017. Epub 2019 May 6.

DOI: 10.1016/j.cortex.2019.04.017
PMID: 31151087
Abstract

Humans seamlessly extract and integrate the emotional content delivered by the face and the voice of others. It is however poorly understood how perceptual decisions unfold in time when people discriminate the expression of emotions transmitted using dynamic facial and vocal signals, as in natural social context. In this study, we relied on a gating paradigm to track how the recognition of emotion expressions across the senses unfold over exposure time. We first demonstrate that across all emotions tested, a discriminatory decision is reached earlier with faces than with voices. Importantly, multisensory stimulation consistently reduced the required accumulation of perceptual evidences needed to reach correct discrimination (Isolation Point). We also observed that expressions with different emotional content provide cumulative evidence at different speeds, with "fear" being the expression with the fastest isolation point across the senses. Finally, the lack of correlation between the confusion patterns in response to facial and vocal signals across time suggest distinct relations between the discriminative features extracted from the two signals. Altogether, these results provide a comprehensive view on how auditory, visual and audiovisual information related to different emotion expressions accumulate in time, highlighting how multisensory context can fasten the discrimination process when minimal information is available.


Similar Articles

1. Time-resolved discrimination of audio-visual emotion expressions.
Cortex. 2019 Oct;119:184-194. doi: 10.1016/j.cortex.2019.04.017. Epub 2019 May 6.
2. Selective Impairment of Basic Emotion Recognition in People with Autism: Discrimination Thresholds for Recognition of Facial Expressions of Varying Intensities.
J Autism Dev Disord. 2018 Jun;48(6):1886-1894. doi: 10.1007/s10803-017-3428-2.
3. Multilevel alterations in the processing of audio-visual emotion expressions in autism spectrum disorders.
Neuropsychologia. 2013 Apr;51(5):1002-10. doi: 10.1016/j.neuropsychologia.2013.02.009. Epub 2013 Feb 24.
4. The representation and plasticity of body emotion expression.
Psychol Res. 2020 Jul;84(5):1400-1406. doi: 10.1007/s00426-018-1133-1. Epub 2019 Jan 2.
5. Hierarchical Brain Network for Face and Voice Integration of Emotion Expression.
Cereb Cortex. 2019 Aug 14;29(9):3590-3605. doi: 10.1093/cercor/bhy240.
6. The asynchronous influence of facial expressions on bodily expressions.
Acta Psychol (Amst). 2019 Sep;200:102941. doi: 10.1016/j.actpsy.2019.102941. Epub 2019 Oct 31.
7. The importance of stimulus variability when studying face processing using fast periodic visual stimulation: A novel 'mixed-emotions' paradigm.
Cortex. 2019 Aug;117:182-195. doi: 10.1016/j.cortex.2019.03.006. Epub 2019 Mar 19.
8. Hormonal and modality specific effects on males' emotion recognition ability.
Psychoneuroendocrinology. 2020 Sep;119:104719. doi: 10.1016/j.psyneuen.2020.104719. Epub 2020 Jun 2.
9. Loneliness and the recognition of vocal socioemotional expressions in adolescence.
Cogn Emot. 2020 Aug;34(5):970-976. doi: 10.1080/02699931.2019.1682971. Epub 2019 Oct 25.
10. New tests to measure individual differences in matching and labelling facial expressions of emotion, and their association with ability to recognise vocal emotions and facial identity.
PLoS One. 2013 Jun 28;8(6):e68126. doi: 10.1371/journal.pone.0068126. Print 2013.

Cited By

1. Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).
Behav Res Methods. 2025 Jan 3;57(1):41. doi: 10.3758/s13428-024-02550-w.
2. Automatic Brain Categorization of Discrete Auditory Emotion Expressions.
Brain Topogr. 2023 Nov;36(6):854-869. doi: 10.1007/s10548-023-00983-8. Epub 2023 Aug 28.
3. The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG.
Brain Sci. 2023 Jan 18;13(2):162. doi: 10.3390/brainsci13020162.