

Single-subject analyses of magnetoencephalographic evoked responses to the acoustic properties of affective non-verbal vocalizations.

Authors

Salvia Emilie, Bestelmeyer Patricia E G, Kotz Sonja A, Rousselet Guillaume A, Pernet Cyril R, Gross Joachim, Belin Pascal

Affiliations

Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.

Bangor Imaging Unit, School of Psychology, Bangor University, Gwynedd, UK.

Publication

Front Neurosci. 2014 Dec 22;8:422. doi: 10.3389/fnins.2014.00422. eCollection 2014.

DOI: 10.3389/fnins.2014.00422
PMID: 25565951
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4273656/
Abstract

Magneto-encephalography (MEG) was used to examine the cerebral response to affective non-verbal vocalizations (ANVs) at the single-subject level. Stimuli consisted of non-verbal affect bursts from the Montreal Affective Voices morphed to parametrically vary acoustical structure and perceived emotional properties. Scalp magnetic fields were recorded in three participants while they performed a 3-alternative forced choice emotion categorization task (Anger, Fear, Pleasure). Each participant performed more than 6000 trials to allow single-subject level statistical analyses using a new toolbox which implements the general linear model (GLM) on stimulus-specific responses (LIMO-EEG). For each participant we estimated "simple" models [including just one affective regressor (Arousal or Valence)] as well as "combined" models (including acoustical regressors). Results from the "simple" models revealed in every participant the significant early effects (as early as ~100 ms after onset) of Valence and Arousal already reported at the group-level in previous work. However, the "combined" models showed that few effects of Arousal remained after removing the acoustically-explained variance, whereas significant effects of Valence remained especially at late stages. This study demonstrates (i) that single-subject analyses replicate the results observed at early stages by group-level studies and (ii) the feasibility of GLM-based analysis of MEG data. It also suggests that early modulation of MEG amplitude by affective stimuli partly reflects their acoustical properties.
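The analysis strategy described in the abstract — fit a "simple" GLM with one affective regressor, then a "combined" GLM that adds acoustical regressors, and ask how much variance the affective regressor still explains once acoustics are accounted for — can be sketched on synthetic trial data. This is an illustrative Python/NumPy sketch of the general idea, not the LIMO-EEG toolbox itself (which is a MATLAB package); the variable names and simulated effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-subject data: ~6000 trials, MEG amplitude at one
# sensor/timepoint. Valence and acoustics are correlated by construction,
# mimicking affective regressors that covary with acoustical structure.
n_trials = 6000
valence = rng.uniform(-1, 1, n_trials)                    # affective regressor
acoustic = 0.8 * valence + rng.normal(0, 0.5, n_trials)   # correlated acoustics
amplitude = 1.5 * acoustic + 0.3 * valence + rng.normal(0, 1, n_trials)

def glm_r2(X, y):
    """Ordinary-least-squares GLM fit; return R^2 (variance explained)."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# "Simple" model: valence alone appears to explain a lot of the signal...
r2_simple = glm_r2(valence[:, None], amplitude)

# "Combined" model: add the acoustic regressor, then ask how much *extra*
# variance valence explains beyond what acoustics already account for.
r2_acoustic = glm_r2(acoustic[:, None], amplitude)
r2_combined = glm_r2(np.column_stack([acoustic, valence]), amplitude)
unique_valence = r2_combined - r2_acoustic

print(f"R2, simple model (valence only): {r2_simple:.3f}")
print(f"R2 unique to valence:            {unique_valence:.3f}")
```

In a fit like this, most of the variance the simple model attributes to valence is absorbed by the acoustic regressor in the combined model — the same logic by which the study concludes that early affective modulation of MEG amplitude partly reflects acoustical properties.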


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00a8/4273656/1ecf16193a15/fnins-08-00422-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00a8/4273656/7c186d29094d/fnins-08-00422-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00a8/4273656/9e3350008b61/fnins-08-00422-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00a8/4273656/faada33f0fc9/fnins-08-00422-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00a8/4273656/35d9eb753acb/fnins-08-00422-g0005.jpg

Similar Articles

1
Single-subject analyses of magnetoencephalographic evoked responses to the acoustic properties of affective non-verbal vocalizations.
Front Neurosci. 2014 Dec 22;8:422. doi: 10.3389/fnins.2014.00422. eCollection 2014.
2
The "Musical Emotional Bursts": a validated set of musical affect bursts to investigate auditory affective processing.
Front Psychol. 2013 Aug 13;4:509. doi: 10.3389/fpsyg.2013.00509. eCollection 2013.
3
Decoding auditory-evoked response in affective states using wearable around-ear EEG system.
Biomed Phys Eng Express. 2023 Aug 25;9(5). doi: 10.1088/2057-1976/acf137.
4
The Montreal Affective Voices: a validated set of nonverbal affect bursts for research on auditory affective processing.
Behav Res Methods. 2008 May;40(2):531-9. doi: 10.3758/brm.40.2.531.
5
The perception of caricatured emotion in voice.
Cognition. 2020 Jul;200:104249. doi: 10.1016/j.cognition.2020.104249. Epub 2020 May 12.
6
Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and Canadian listeners.
Front Psychol. 2013 Mar 19;4:105. doi: 10.3389/fpsyg.2013.00105. eCollection 2013.
7
Automatic Brain Categorization of Discrete Auditory Emotion Expressions.
Brain Topogr. 2023 Nov;36(6):854-869. doi: 10.1007/s10548-023-00983-8. Epub 2023 Aug 28.
8
The representational dynamics of perceived voice emotions evolve from categories to dimensions.
Nat Hum Behav. 2021 Sep;5(9):1203-1213. doi: 10.1038/s41562-021-01073-0. Epub 2021 Mar 11.
9
CNEV: A corpus of Chinese nonverbal emotional vocalizations with a database of emotion category, valence, arousal, and gender.
Behav Res Methods. 2025 Jan 21;57(2):62. doi: 10.3758/s13428-024-02595-x.
10
Emotional authenticity modulates affective and social trait inferences from voices.
Philos Trans R Soc Lond B Biol Sci. 2021 Dec 20;376(1840):20200402. doi: 10.1098/rstb.2020.0402. Epub 2021 Nov 1.

Cited By

1
Improving the Eligibility of Task-Based fMRI Studies for Meta-Analysis: A Review and Reporting Recommendations.
Neuroinformatics. 2024 Jan;22(1):5-22. doi: 10.1007/s12021-023-09643-5. Epub 2023 Nov 4.
2
Automatic Brain Categorization of Discrete Auditory Emotion Expressions.
Brain Topogr. 2023 Nov;36(6):854-869. doi: 10.1007/s10548-023-00983-8. Epub 2023 Aug 28.
3
The perception of caricatured emotion in voice.
Cognition. 2020 Jul;200:104249. doi: 10.1016/j.cognition.2020.104249. Epub 2020 May 12.
4
Soundgen: An open-source tool for synthesizing nonverbal vocalizations.
Behav Res Methods. 2019 Apr;51(2):778-792. doi: 10.3758/s13428-018-1095-7.
5
Converging evidence for [coronal] underspecification in English-speaking adults.
J Neurolinguistics. 2017 Nov;44:147-162. doi: 10.1016/j.jneuroling.2017.05.003. Epub 2017 May 29.

References

1
Adaptation to vocal expressions reveals multistep perception of auditory emotion.
J Neurosci. 2014 Jun 11;34(24):8098-105. doi: 10.1523/JNEUROSCI.4820-13.2014.
2
On the role of crossmodal prediction in audiovisual emotion perception.
Front Hum Neurosci. 2013 Jul 18;7:369. doi: 10.3389/fnhum.2013.00369. eCollection 2013.
3
Early ERPs to faces and objects are driven by phase, not amplitude spectrum information: evidence from parametric, test-retest, single-subject analyses.
J Vis. 2012 Dec 14;12(13):12. doi: 10.1167/12.13.12.
4
Vocal emotions influence verbal memory: neural correlates and interindividual differences.
Cogn Affect Behav Neurosci. 2013 Mar;13(1):80-93. doi: 10.3758/s13415-012-0132-8.
5
The early spatio-temporal correlates and task independence of cerebral voice processing studied with MEG.
Cereb Cortex. 2013 Jun;23(6):1388-95. doi: 10.1093/cercor/bhs119. Epub 2012 May 17.
6
Emotional cues during simultaneous face and voice processing: electrophysiological insights.
PLoS One. 2012;7(2):e31001. doi: 10.1371/journal.pone.0031001. Epub 2012 Feb 22.
7
Predicting vocal emotion expressions from the human brain.
Hum Brain Mapp. 2013 Aug;34(8):1971-81. doi: 10.1002/hbm.22041. Epub 2012 Feb 27.
8
Understanding voice perception.
Br J Psychol. 2011 Nov;102(4):711-25. doi: 10.1111/j.2044-8295.2011.02041.x. Epub 2011 Jun 7.
9
Modeling Single-Trial ERP Reveals Modulation of Bottom-Up Face Visual Processing by Top-Down Task Constraints (in Some Subjects).
Front Psychol. 2011 Jun 23;2:137. doi: 10.3389/fpsyg.2011.00137. eCollection 2011.
10
The temporal dynamics of processing emotions from vocal, facial, and bodily expressions.
Neuroimage. 2011 Sep 15;58(2):665-74. doi: 10.1016/j.neuroimage.2011.06.035. Epub 2011 Jun 22.