The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA.
Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA.
Autism Res. 2024 Feb;17(2):280-310. doi: 10.1002/aur.3104. Epub 2024 Feb 9.
Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where the activation to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms, such as an altered disengagement of Default Mode Network processes during observation of the language stimulus across conditions.