Vaina Lucia M, Calabro Finnegan J, Samal Abhisek, Rana Kunjan D, Mamashli Fahimeh, Khan Sheraz, Hämäläinen Matti, Ahlfors Seppo P, Ahveninen Jyrki
Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School-Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, MA, USA.
Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.
Brain Res. 2021 Aug 15;1765:147489. doi: 10.1016/j.brainres.2021.147489. Epub 2021 Apr 18.
Visual segregation of moving objects is a considerable computational challenge when the observer moves through space. Recent psychophysical studies suggest that directionally congruent, moving auditory cues can substantially improve the parsing of object motion in such settings, but the exact brain mechanisms and visual processing stages that mediate these effects are still incompletely known. Here, we utilized multivariate pattern analyses (MVPA) of MRI-informed magnetoencephalography (MEG) source estimates to examine how crossmodal auditory cues facilitate motion detection during the observer's self-motion. During MEG recordings, participants identified a target object that moved either forward or backward within a visual scene that included nine identically textured objects simulating forward observer translation. Auditory motion cues 1) improved the behavioral accuracy of target localization, 2) significantly modulated the MEG source activity in area V2 and the human middle temporal complex (hMT+), and 3) increased the accuracy with which the target movement direction could be decoded from hMT+ activity using MVPA. The auditory-cue-related increase in decoding accuracy in hMT+ remained significant even when superior temporal activations in or near auditory cortices were regressed out from the hMT+ source activity to control for source estimation biases caused by point spread. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow in the human extrastriate visual cortex can be facilitated by crossmodal influences from the auditory system.
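The decoding and confound-removal steps described above can be illustrated with a minimal sketch: decoding target motion direction (forward vs. backward) from hMT+ source-space activity after regressing out a superior temporal (auditory-cortex) time course to reduce point-spread leakage. This is not the authors' actual pipeline; all variable names, array shapes, and the synthetic data are assumptions for illustration only.

```python
# Illustrative sketch (not the published analysis): MVPA decoding of movement
# direction from hMT+ source time courses, with an auditory-cortex confound
# regressed out per trial. Shapes and labels below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

n_trials, n_sources, n_times = 120, 40, 100                  # hypothetical dimensions
X_hmt = rng.standard_normal((n_trials, n_sources, n_times))  # hMT+ source estimates
x_aud = rng.standard_normal((n_trials, n_times))             # superior temporal time course
y = rng.integers(0, 2, n_trials)                             # 0 = backward, 1 = forward

# Regress the auditory time course out of each hMT+ source, trial by trial,
# keeping only the residuals (simple least-squares confound removal).
X_clean = np.empty_like(X_hmt)
for t in range(n_trials):
    a = x_aud[t][:, None]                                    # (n_times, 1) regressor
    beta, *_ = np.linalg.lstsq(a, X_hmt[t].T, rcond=None)    # fit per source
    X_clean[t] = (X_hmt[t].T - a @ beta).T                   # residual time courses

# Decode movement direction from the residual spatiotemporal pattern with a
# linear classifier and stratified cross-validation.
X_feat = X_clean.reshape(n_trials, -1)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_feat, y, cv=cv, scoring="accuracy")
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With the random synthetic data used here, accuracy should hover near chance (0.5); in the study, above-chance decoding in hMT+ and its enhancement by auditory cues were assessed on real MEG source estimates.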