Mohsenzadeh Yalda, Mullin Caitlin, Lahner Benjamin, Cichy Radoslaw Martin, Oliva Aude
Computer Science and AI Lab., Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA.
Vision (Basel). 2019 Feb 10;3(1):8. doi: 10.3390/vision3010008.
To build a representation of what we see, the human brain recruits regions throughout the visual cortex in a cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow for the interpretation of these spatiotemporal dynamics in a broader context.
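The core of the fusion approach described above is comparing representational dissimilarity matrices (RDMs): one RDM per MEG time point and one per fMRI region, correlated over their off-diagonal entries to yield a fusion time course per region. The following is a minimal sketch of that idea; the array shapes, the single hypothetical ROI, and the random placeholder data are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_conditions = 10   # e.g. number of image stimuli
n_timepoints = 5    # MEG time points (illustrative)

# Hypothetical RDMs: one per MEG time point, plus one for a single
# hypothetical fMRI region of interest. Real RDMs would be computed
# from pattern dissimilarities across conditions.
meg_rdms = rng.random((n_timepoints, n_conditions, n_conditions))
fmri_rdm = rng.random((n_conditions, n_conditions))

# RDMs are symmetric with a zero diagonal, so only the upper
# triangle (excluding the diagonal) carries information.
iu = np.triu_indices(n_conditions, k=1)

def spearman(a, b):
    """Spearman correlation via Pearson correlation of ranks
    (assumes no ties, which holds for continuous random data)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def fuse(meg_rdm, fmri_rdm):
    """Similarity of MEG and fMRI representational geometries."""
    return spearman(meg_rdm[iu], fmri_rdm[iu])

# Fusion time course: how similar is the MEG representational
# geometry at each time point to this region's fMRI geometry?
time_course = [fuse(m, fmri_rdm) for m in meg_rdms]
print(len(time_course))  # one fusion value per MEG time point
```

Repeating this over many regions (or searchlight locations) yields the whole-brain spatiotemporal maps whose reproducibility the study assesses.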