Cross-modal decoding of emotional expressions in fMRI: Cross-session and cross-sample replication.

Author information

Wallenwein Lara A, Schmidt Stephanie N L, Hass Joachim, Mier Daniela

Affiliations

Department of Psychology, University of Konstanz, Konstanz, Germany.

Faculty of Applied Psychology, SRH University Heidelberg, Heidelberg, Germany.

Publication information

Imaging Neurosci (Camb). 2024 Sep 23;2. doi: 10.1162/imag_a_00289. eCollection 2024.

Abstract

The theory of embodied simulation posits a common neuronal representation for action and perception in mirror neurons (MN) that allows an automatic understanding of another person's mental state. Multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data enables a joint investigation of the MN properties of cross-modality and action specificity with high spatial sensitivity. In repeated-measures and independent samples, we measured BOLD-fMRI activation during a social-cognitive paradigm that included the imitation, execution, and observation of facial expressions of fear or anger. Using support vector machines in a region-of-interest and a searchlight-based within-subject approach, we classified the emotional content first within modalities and subsequently across modalities. Of main interest were regions of the MN system and the emotional face processing system. A two-step permutation scheme served to evaluate the significance of classification accuracies. Additionally, we analyzed cross-session and cross-sample replicability. Classification of emotional content was significantly above chance within modality in the execution and imitation conditions, with replication across sessions and across samples, but not in the observation condition. Cross-modal classification was possible when the classifier was trained on the execution condition and tested on the imitation condition, with cross-session replication. The searchlight analysis revealed additional areas exhibiting action specificity and cross-modality, mainly in the prefrontal cortex. We demonstrate the replicability of brain regions with action-specific and cross-modal representations of fear and anger for execution and imitation. Since we could not find a shared neural representation of emotions within the observation modality, our results lend only partial support to the embodied simulation theory. We conclude that activation in MN regions is less robust and less clearly distinguishable during observation than during motor tasks.
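
The decoding scheme described in the abstract can be illustrated with a minimal sketch. The Python code below shows the general logic of within-modality and cross-modal SVM decoding (train on execution trials, test on imitation trials) together with a simple label-permutation test. The data layout (one beta-pattern vector per trial, already masked to a region of interest), the variable names, the trial counts, and the single-step permutation test are all illustrative assumptions, not the authors' exact pipeline, which used a two-step permutation scheme.

```python
# Minimal sketch of within-modality and cross-modal emotion decoding
# with a linear SVM (scikit-learn). All data here are random stand-ins;
# in practice X_* would hold trial-wise beta patterns from an ROI.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

# Hypothetical data: n_trials x n_voxels patterns per modality,
# with binary emotion labels (0 = anger, 1 = fear).
n_trials, n_voxels = 40, 200
X_exec = rng.standard_normal((n_trials, n_voxels))  # execution trials
X_imit = rng.standard_normal((n_trials, n_voxels))  # imitation trials
y_exec = rng.integers(0, 2, n_trials)
y_imit = rng.integers(0, 2, n_trials)

clf = SVC(kernel="linear")

# Within-modality decoding: cross-validated accuracy inside one modality.
within_acc = cross_val_score(
    clf, X_exec, y_exec, cv=StratifiedKFold(5)
).mean()

# Cross-modal decoding: train on execution, test on imitation.
cross_acc = clf.fit(X_exec, y_exec).score(X_imit, y_imit)

# Simple label-permutation test for the cross-modal accuracy
# (a stand-in for the paper's two-step permutation scheme).
n_perm = 1000
null = np.empty(n_perm)
for i in range(n_perm):
    y_perm = rng.permutation(y_exec)
    null[i] = clf.fit(X_exec, y_perm).score(X_imit, y_imit)
p_value = (np.sum(null >= cross_acc) + 1) / (n_perm + 1)

print(f"within-modality accuracy = {within_acc:.2f}")
print(f"cross-modal accuracy     = {cross_acc:.2f}, p = {p_value:.3f}")
```

Training on one modality and testing on another is what makes the decoding cross-modal: above-chance transfer accuracy implies that the two modalities share a neural representation of the emotional content, which is the property the embodied simulation account predicts for MN regions.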

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72b2/12290836/40a02af80fd2/imag_a_00289_fig1.jpg
