The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 5B7, Canada.
Department of Education and Psychology, Freie Universität Berlin, 14195, Berlin-Dahlem, Germany.
Neuroimage. 2019 Jun;193:167-177. doi: 10.1016/j.neuroimage.2019.03.028. Epub 2019 Mar 15.
Human high-level visual cortex shows a distinction between animate and inanimate objects, as revealed by fMRI. Recent studies have shown that object animacy can similarly be decoded from MEG sensor patterns. Which object properties drive this decoding? Here, we disentangled the influence of perceptual and categorical object properties by presenting perceptually matched objects (e.g., snake and rope) that were nonetheless easily recognizable as being animate or inanimate. In a series of behavioral experiments, three aspects of perceptual dissimilarity of these objects were quantified: overall dissimilarity, outline dissimilarity, and texture dissimilarity. Neural dissimilarity of MEG sensor patterns was modeled using regression analysis, in which perceptual dissimilarity (from the behavioral experiments) and categorical dissimilarity served as predictors of neural dissimilarity. We found that perceptual dissimilarity was strongly reflected in MEG sensor patterns from 80 ms after stimulus onset, with separable contributions of outline and texture dissimilarity. Surprisingly, when controlling for perceptual dissimilarity, MEG patterns did not carry information about object category (animate vs. inanimate) at any time point. Nearly identical results were found in a second MEG experiment that required basic-level object recognition. This is in contrast to results observed in fMRI using the same stimuli, task, and analysis approach: fMRI voxel patterns in object-selective cortex showed a highly reliable categorical distinction even when controlling for perceptual dissimilarity. These results suggest that MEG sensor patterns do not capture object animacy independently of perceptual differences between animate and inanimate objects.
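The regression approach described in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' analysis code: the variable names, array shapes, and the ordinary-least-squares estimator are assumptions made for illustration only. The idea is to vectorize the lower triangle of each representational dissimilarity matrix (RDM) and, at every MEG time point, jointly regress the neural RDM onto the perceptual and categorical model RDMs, so that each predictor's beta reflects its contribution while controlling for the others.

# Minimal sketch of a time-resolved RDM regression (illustrative, not the
# published pipeline). Assumed inputs: a neural RDM time course of shape
# (n_times, n_items, n_items) and a dict of model RDMs, each of shape
# (n_items, n_items).
import numpy as np

def vectorize_rdm(rdm):
    # Lower-triangle (off-diagonal) entries of a symmetric RDM.
    i, j = np.tril_indices(rdm.shape[0], k=-1)
    return rdm[i, j]

def rdm_regression(neural_rdms, model_rdms):
    # Jointly regress all model RDMs onto the neural RDM at each time
    # point; returns (model names, betas of shape (n_times, n_models)).
    names = list(model_rdms)
    # Design matrix: z-scored model predictors plus an intercept column.
    X = np.column_stack([vectorize_rdm(model_rdms[n]) for n in names])
    X = (X - X.mean(0)) / X.std(0)
    X = np.column_stack([np.ones(X.shape[0]), X])
    betas = np.empty((neural_rdms.shape[0], len(names)))
    for t, rdm in enumerate(neural_rdms):
        y = vectorize_rdm(rdm)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas[t] = coef[1:]  # drop the intercept
    return names, betas

if __name__ == "__main__":
    # Synthetic data just to show the shapes; 120 time points, 48 objects.
    rng = np.random.default_rng(0)
    n_times, n_items = 120, 48
    models = {}
    for name in ("perceptual", "outline", "texture", "category"):
        m = rng.random((n_items, n_items))
        models[name] = (m + m.T) / 2  # symmetrize
    neural = rng.random((n_times, n_items, n_items))
    neural = (neural + neural.transpose(0, 2, 1)) / 2
    names, betas = rdm_regression(neural, models)
    print(names, betas.shape)  # four beta time courses, shape (120, 4)

Under this scheme, the result reported above would correspond to reliable perceptual (and outline/texture) beta time courses from roughly 80 ms onward, while the category beta stays near zero at all time points once the perceptual predictors are in the model.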