The neural network sustaining the crossmodal processing of human gender from faces and voices: an fMRI study.

Affiliations

Université catholique de Louvain, Faculté de Psychologie et des Sciences de l'Education-IPSY/NEUROCS, Louvain-la-Neuve, Belgium.

Publication information

Neuroimage. 2011 Jan 15;54(2):1654-61. doi: 10.1016/j.neuroimage.2010.08.073. Epub 2010 Sep 9.

Abstract

The aim of this fMRI study was to investigate the cerebral crossmodal interactions between human faces and voices during a gender categorization task. Twelve healthy male participants took part in the study. They were scanned in 4 runs containing 3 conditions, consisting of the presentation of faces, voices, or congruent face-voice pairs. The task was to categorize each trial (visual, auditory, or association) according to its gender (male or female). The subtraction between the bimodal condition and the sum of the unimodal ones showed that categorizing face/voice associations according to their gender produced unimodal activations of visual (right calcarine sulcus) and auditory regions (bilateral superior temporal gyri), and specific supramodal activations of the left superior parietal gyrus and the right inferior frontal gyrus. Moreover, psychophysiological interaction (PPI) analyses revealed that both unimodal regions were interconnected and connected to the prefrontal gyrus and the putamen, and that the left parietal gyrus had enhanced connectivity with a parieto-premotor circuit involved in the crossmodal control of attention. This fMRI study showed that the crossmodal auditory-visual categorization of human gender is sustained by a network of cerebral regions highly similar to those observed in our previous studies examining the crossmodal interactions involved in face/voice recognition (Joassin et al., 2010). This suggests that the crossmodal processing of human stimuli requires the activation of a network of cortical regions, including both unimodal visual and auditory regions and supramodal parietal and frontal regions involved in the integration of faces and voices and in crossmodal attentional processes, activated independently of the task to be performed or the cognitive level of processing.
