
Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

Author Information

Li Yuanqing, Long Jinyi, Huang Biao, Yu Tianyou, Wu Wei, Liu Yongjian, Liang Changhong, Sun Pei

Affiliations

Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China.

Department of Radiology, Guangdong General Hospital, Guangzhou 510080, China.

Publication Information

Cereb Cortex. 2015 Feb;25(2):384-95. doi: 10.1093/cercor/bht228. Epub 2013 Aug 26.

Abstract

Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). It has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we examined crossmodal integration in audiovisual face perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual-only/auditory-only stimuli were created from these movie clips by removing the auditory/visual content. Subjects judged the gender/emotion category of each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition while functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using decoding accuracy and brain pattern-related reproducibility indices, obtained from the fMRI data by a multivariate pattern analysis (MVPA) method. Compared with the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in the audiovisual integration.
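The decoding-accuracy measure described in the abstract is, at its core, cross-validated classification of feature categories (e.g., male vs. female) from multivoxel activity patterns. A minimal sketch of that idea, using scikit-learn and synthetic data in place of the authors' actual fMRI patterns and their specific MVPA method (the classifier choice, fold count, and data shapes here are illustrative assumptions, not details from the paper):

```python
# Hypothetical MVPA decoding sketch: cross-validated classification of a
# stimulus feature from voxel patterns. Synthetic data stands in for fMRI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score


def decoding_accuracy(X, y, n_folds=5):
    """Estimate how well a feature (e.g., gender category) can be decoded
    from activity patterns X (trials x voxels) via cross-validated
    linear classification."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    scores = cross_val_score(clf, X, y, cv=n_folds)
    return scores.mean()


rng = np.random.default_rng(0)

# Synthetic "voxel patterns": 40 trials x 100 voxels, two classes whose
# means differ slightly, mimicking a weakly decodable feature.
n_trials, n_voxels = 40, 100
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels)) + y[:, None] * 0.5

acc = decoding_accuracy(X, y)
print(f"decoding accuracy: {acc:.2f}")
```

Under this framing, the paper's comparison amounts to computing such an accuracy separately for patterns recorded in the audiovisual, visual-only, and auditory-only conditions and testing whether the audiovisual condition yields higher values.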

