Department of Biomedical Engineering, Columbia University, New York, NY, United States of America.
Department of Radiology, Columbia University Irving Medical Center, New York, NY 10032, United States of America.
J Neural Eng. 2022 Jan 6;18(6). doi: 10.1088/1741-2552/ac4593.
Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies have typically employed well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm.

Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil and dwell time signals to classify reorienting events.

In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with the EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier integrating EEG, pupil and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) condition.

We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify target vs. distractor objects to which the human subject orients.
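The classification approach described above rests on HDCA, which first learns a discriminating projection of the multichannel signal within each short time window and then combines the per-window scores into a single component, with performance summarized by the area under the ROC curve (AUC). The following is a minimal sketch of that two-level scheme on synthetic data; the array sizes, the use of Fisher linear discriminants as the per-window learners, and the ridge term are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated epochs: trials x channels x samples (sizes are hypothetical)
n_trials, n_channels, n_samples, win = 200, 8, 100, 10
y = rng.integers(0, 2, n_trials)           # 1 = target, 0 = distractor
X = rng.standard_normal((n_trials, n_channels, n_samples))
X[y == 1, 0, :] += 0.5                     # inject a class difference on channel 0

def fld_weights(A, labels):
    """Fisher linear discriminant weights (with a small ridge for stability)."""
    m1, m0 = A[labels == 1].mean(0), A[labels == 0].mean(0)
    Sw = np.cov(A[labels == 1].T) + np.cov(A[labels == 0].T)
    return np.linalg.solve(Sw + 1e-6 * np.eye(A.shape[1]), m1 - m0)

# Level 1: one spatial discriminator per time window -> per-window scores
n_wins = n_samples // win
scores = np.empty((n_trials, n_wins))
for k in range(n_wins):
    Ak = X[:, :, k * win:(k + 1) * win].mean(2)   # average within the window
    scores[:, k] = Ak @ fld_weights(Ak, y)

# Level 2: combine the window scores into one discriminating component
final = scores @ fld_weights(scores, y)

def auc(s, labels):
    """Area under the ROC curve via the rank-sum identity."""
    r = np.argsort(np.argsort(s)) + 1
    n1, n0 = labels.sum(), (labels == 0).sum()
    return (r[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

print(f"in-sample AUC: {auc(final, y):.2f}")
```

For brevity the sketch scores the same trials it was trained on; the reported AUCs of 0.79 and 0.77 would instead come from held-out data, e.g. cross-validated splits of trials.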