Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, P.O. Box 9104, 6500 HE Nijmegen, The Netherlands.
Neuroimage. 2013 Dec;83:1063-73. doi: 10.1016/j.neuroimage.2013.07.075. Epub 2013 Aug 6.
Visual processing is a complex task that is best investigated using sensitive multivariate analysis methods which can capture representation-specific brain activity over both time and space. In this study, we applied a multivariate decoding algorithm to MEG data from subjects engaged in passive viewing of images of faces, scenes, bodies and tools. We used reconstructed source-space time courses as input to the algorithm in order to localize the brain regions involved in optimal image discrimination. Applying this method to the interval of 115 to 315 ms after stimulus onset, we show a focal localization of the regression coefficients that drive decoding of the different perceived image categories in the inferior occipital, middle occipital, and lingual gyri. Classifier accuracy was highest (over 90% of trials classified correctly, compared to a chance-level accuracy of 50%) when dissociating the perception of faces from the perception of other object categories. Furthermore, we applied this method to each single time point to extract the temporal evolution of visual perception. This allowed for the detection of differences in visual category perception as early as 85 ms after stimulus onset. In addition, localizing the corresponding regression coefficients at each time point allowed us to capture the spatiotemporal dynamics of visual category perception. This revealed initial involvement of sources in the inferior occipital, inferior temporal and superior occipital gyri. During sustained stimulation, additional sources in the anterior inferior temporal gyrus and superior parietal gyrus became involved. We conclude that decoding of source-space MEG data provides a suitable method to investigate the spatiotemporal dynamics of ongoing cognitive processing.
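The abstract describes time-resolved decoding of source-space MEG data with per-time-point classification and inspection of the regression coefficients. The sketch below illustrates that general scheme only; the data shapes, variable names, and the choice of an L2-regularized logistic regression with 5-fold stratified cross-validation are assumptions for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch of time-resolved decoding of source-space MEG data.
# All shapes and parameters are illustrative assumptions, not the published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated source-space data: trials x source locations x time points.
# In the study, reconstructed source time courses served as classifier input.
n_trials, n_sources, n_times = 200, 500, 120
X = rng.standard_normal((n_trials, n_sources, n_times))
y = rng.integers(0, 2, n_trials)  # e.g. faces (1) vs. another category (0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Decode each time point separately to trace the temporal evolution of
# category information (chance level is 50% for this binary contrast).
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()

# The fitted coefficients at a given time point can be mapped back onto the
# source grid to see which regions drive the discrimination.
clf.fit(X[:, :, n_times // 2], y)
coef_map = clf.named_steps["logisticregression"].coef_.ravel()  # one weight per source
```

With real data, the `accuracy` curve would show when category information first becomes decodable (reported here as early as 85 ms after stimulus onset), and `coef_map` could be rendered on the source model to localize the contributing regions.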