Neuroimaging Laboratory, IRCCS, Santa Lucia Foundation, Via Ardeatina 306, Rome 00179, Italy.
Neuroimage. 2013 Feb 15;67:213-26. doi: 10.1016/j.neuroimage.2012.11.031. Epub 2012 Nov 29.
The investigation of brain activity using naturalistic, ecologically valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g., independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, both of which involved subjects watching an episode of a TV series. In Exp 1, we manipulated the presentation by switching color, motion and/or sound on and off at variable intervals, whereas in Exp 2 the video was played in its original version, with all the ensuing continuous changes of the different sensory features left intact. For both vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of the low-level features. Visual saliency was found to further boost activity in extrastriate visual cortex and posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified "sensory" networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., whether these relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom-up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches.
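To make the described pipeline concrete, below is a minimal Python sketch of how stimulus statistics of this kind can be turned into GLM regressors. This is not the paper's actual saliency model: the spatial and temporal discontinuity measures are simple luminance-based proxies, and the function names (`frame_statistics`, `to_regressor`, `hrf`) are illustrative assumptions. The sketch assumes grayscale video frames, a canonical double-gamma HRF, and resampling of the convolved time course to one value per scan (TR), which can then be entered as a column of the GLM design matrix.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, under=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma haemodynamic response function (SPM-like shape)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, under)

def frame_statistics(frames):
    """Per-frame proxies for spatial and temporal low-level discontinuities.

    frames: array (n_frames, height, width) of grayscale luminance values.
    Returns (spatial, temporal): mean gradient magnitude and mean absolute
    frame-to-frame difference, both z-scored. These are simple stand-ins for
    the model-derived feature/saliency time courses described in the abstract.
    """
    gy, gx = np.gradient(frames, axis=(1, 2))
    spatial = np.sqrt(gx ** 2 + gy ** 2).mean(axis=(1, 2))
    temporal = np.abs(np.diff(frames, axis=0, prepend=frames[:1])).mean(axis=(1, 2))
    z = lambda x: (x - x.mean()) / x.std()
    return z(spatial), z(temporal)

def to_regressor(stat, frame_rate, tr, n_scans):
    """Convolve a frame-rate statistic with the HRF and resample to one value per TR."""
    kernel_t = np.arange(0, 32, 1.0 / frame_rate)          # 32-s HRF kernel
    conv = np.convolve(stat, hrf(kernel_t))[:len(stat)]     # causal convolution
    frame_times = np.arange(len(stat)) / frame_rate
    scan_times = np.arange(n_scans) * tr
    return np.interp(scan_times, frame_times, conv)

# Usage with synthetic frames (in practice: decoded frames of the episode).
rng = np.random.default_rng(0)
frames = rng.random((3000, 64, 80))                         # e.g. 2 min at 25 fps
spatial, temporal = frame_statistics(frames)
n_scans, tr = 60, 2.0
X = np.column_stack([
    to_regressor(spatial, frame_rate=25.0, tr=tr, n_scans=n_scans),
    to_regressor(temporal, frame_rate=25.0, tr=tr, n_scans=n_scans),
    np.ones(n_scans),                                       # intercept
])
# X would serve as the design matrix of a voxel-wise GLM, e.g. beta = np.linalg.pinv(X) @ y
```

Convolving at the video frame rate before downsampling to the TR is a deliberate choice in this sketch: it preserves sub-TR fluctuations of the stimulus statistics before they are projected onto the coarser fMRI sampling grid. Analogous auditory regressors could be built from the sound track (e.g., envelope and spectral-change measures) under the same scheme.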