Abbatecola Clement, Gerardin Peggy, Beneyton Kim, Kennedy Henry, Knoblauch Kenneth
Univ Lyon, Université Claude Bernard Lyon 1, INSERM, Stem Cell and Brain Research Institute U1208, Bron, France.
Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom.
Front Syst Neurosci. 2021 May 28;15:669256. doi: 10.3389/fnsys.2021.669256. eCollection 2021.
Cross-modal effects provide a model framework for investigating hierarchical inter-areal processing, particularly under conditions where unimodal cortical areas receive contextual feedback from other modalities. Here, using complementary behavioral and brain imaging techniques, we investigated the functional networks participating in face and voice processing during gender perception, a high-level feature of voice and face perception. Within the framework of a signal detection decision model, maximum likelihood conjoint measurement (MLCM) was used to estimate the contributions of the face and voice to gender comparisons between pairs of audio-visual stimuli in which the face and voice were independently modulated. Top-down contributions were varied by instructing participants to make judgments based on the gender of the face, the voice, or both modalities (n = 12 for each task). Estimated face and voice contributions to the judgments of the stimulus pairs were not independent; both contributed to all tasks, but their respective weights varied over a 40-fold range due to top-down influences. The models that best described the modal contributions required the inclusion of two different top-down interactions: (i) an interaction that depended on gender congruence across modalities (i.e., the difference between the face and voice modalities for each stimulus); (ii) an interaction that depended on the gender magnitude within each modality. The significance of these interactions was task dependent. Specifically, the gender congruence interaction was significant for the face and voice tasks, while the gender magnitude interaction was significant for the face and stimulus tasks. Subsequently, we used the same stimuli and related tasks in a functional magnetic resonance imaging (fMRI) paradigm (n = 12) to explore the neural correlates of these perceptual processes, analyzed with Dynamic Causal Modeling (DCM) and Bayesian Model Selection.
Results revealed changes in effective connectivity between the unimodal Fusiform Face Area (FFA) and Temporal Voice Area (TVA) in a fashion that paralleled the face and voice behavioral interactions observed in the psychophysical data. These findings highlight the role of multiple parallel unimodal feedback pathways in perception.
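The MLCM estimation step described in the abstract can be sketched as follows. This is an illustrative simulation, not the authors' analysis code: the number of morph levels, the ground-truth scale values, the decision-noise level, and the logistic link are all assumptions chosen for the sketch. The core idea is faithful to MLCM, though: each trial compares two audio-visual stimuli under an additive decision model, and the face and voice perceptual scales are recovered jointly by maximum likelihood from the binary choices.

```python
import numpy as np
from itertools import combinations

# Hedged sketch of MLCM: simulate paired gender comparisons of audio-visual
# stimuli under an additive signal-detection decision model, then recover the
# face and voice perceptual scales by maximum likelihood (logistic GLM fit
# with Newton-Raphson). All numeric settings below are illustrative.
rng = np.random.default_rng(0)

n_levels = 5                                 # morph levels per modality (assumed)
psi_face = np.linspace(0.0, 2.0, n_levels)   # hypothetical true face scale
psi_voice = np.linspace(0.0, 0.8, n_levels)  # hypothetical weaker voice scale
noise_sd = 1.0                               # decision noise (assumed)

stimuli = [(f, v) for f in range(n_levels) for v in range(n_levels)]
rows, resp = [], []
for s1, s2 in combinations(stimuli, 2):
    for _ in range(20):                      # repeats per stimulus pair
        # Additive decision variable: summed face + voice scale difference
        d = (psi_face[s1[0]] + psi_voice[s1[1]]
             - psi_face[s2[0]] - psi_voice[s2[1]])
        resp.append(int(d + rng.normal(0.0, noise_sd) > 0))
        # Design row: +1 for stimulus 1's levels, -1 for stimulus 2's levels
        x = np.zeros(2 * n_levels)
        x[s1[0]] += 1; x[s2[0]] -= 1
        x[n_levels + s1[1]] += 1; x[n_levels + s2[1]] -= 1
        rows.append(x)

X, y = np.array(rows), np.array(resp)
# Anchor the first level of each scale at 0 for identifiability
X = np.delete(X, [0, n_levels], axis=1)

# Newton-Raphson on the logistic log-likelihood
beta = np.zeros(X.shape[1])
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    H = (X * (p * (1 - p))[:, None]).T @ X + 1e-6 * np.eye(X.shape[1])
    beta += np.linalg.solve(H, grad)

face_scale = np.concatenate([[0.0], beta[:n_levels - 1]])
voice_scale = np.concatenate([[0.0], beta[n_levels - 1:]])
print("estimated face scale: ", np.round(face_scale, 2))
print("estimated voice scale:", np.round(voice_scale, 2))
```

In this sketch the recovered face scale spans a wider range than the voice scale, mirroring how MLCM exposes the relative weights of the two modalities; the study's richer models additionally included congruence and magnitude interaction terms on top of this additive baseline.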