School of Psychology, Vita-Salute San Raffaele University, Milan, Italy.
Experimental Psychology Unit, Division of Neuroscience, IRCCS San Raffaele, Milan, Italy.
J Neural Eng. 2021 Mar 8;18(3). doi: 10.1088/1741-2552/ab8e8f.
We have recently developed a prototype of a novel human-computer interface for assistive communication based on voluntary shifts of attention (gaze) from a far target to a near target, associated with a decrease in pupil size (Pupillary Accommodative Response, PAR), an automatic vegetative response that can be easily recorded. We report here an extension of that approach based on pupillary and cortical frequency tagging.

In 18 healthy volunteers, we investigated the possibility of decoding attention shifts in depth by exploiting the evoked oscillatory responses of the pupil (Pupillary Oscillatory Response, POR, recorded with a low-cost device) and of the visual cortex (Steady-State Visual Evoked Potentials, SSVEP, recorded from 4 scalp electrodes). With a simple binary communication protocol (focusing on the far target meaning 'No', focusing on the near target meaning 'Yes'), we aimed to discriminate when the observer's overt attention (gaze) shifted from the far to the near target, which flickered at different frequencies.

By applying a binary linear classifier (Support Vector Machine, SVM, with leave-one-out cross-validation) to the POR and SSVEP signals, we found that, with only twenty trials and no behavioural training of the subjects, the offline median decoding accuracy was 75% with POR signals and 80% with SSVEP signals. When the two signals were combined, accuracy reached 83%. The number of observers for whom accuracy exceeded 70% was 11/18, 12/18 and 14/18 with POR, SSVEP and combined features, respectively. A signal detection analysis confirmed these results.
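The decoding scheme described above can be sketched in a few lines: extract the spectral power at each target's flicker frequency from a trial, then classify trials with a linear SVM under leave-one-out cross-validation. This is a minimal illustrative sketch on synthetic data; the sampling rate, flicker frequencies, trial length, and feature choice are assumptions, not the authors' actual parameters or code.

```python
# Illustrative sketch of frequency-tagged decoding with a linear SVM and
# leave-one-out CV. All parameters below (sampling rate, flicker
# frequencies, trial length, synthetic data) are assumptions for the demo.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

FS = 60.0                   # assumed sampling rate of the recorded trace (Hz)
F_FAR, F_NEAR = 1.0, 1.5    # assumed flicker frequencies of the two targets

def tag_power(trial, fs, freq):
    """Spectral power of one trial at a tagged flicker frequency."""
    spectrum = np.abs(np.fft.rfft(trial)) ** 2
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

def features(trials, fs=FS):
    """One feature per tag frequency, per trial."""
    return np.array([[tag_power(t, fs, F_FAR), tag_power(t, fs, F_NEAR)]
                     for t in trials])

# Synthetic stand-in for twenty 5-s trials: 10 'No' (far) + 10 'Yes' (near)
rng = np.random.default_rng(0)
t = np.arange(int(5 * FS)) / FS
far = [np.sin(2 * np.pi * F_FAR * t) + rng.normal(0, 1, t.size)
       for _ in range(10)]
near = [np.sin(2 * np.pi * F_NEAR * t) + rng.normal(0, 1, t.size)
        for _ in range(10)]
X = features(far + near)
y = np.array([0] * 10 + [1] * 10)   # 0 = far/'No', 1 = near/'Yes'

# Binary linear SVM with leave-one-out cross-validation, as in the abstract
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(f"LOO decoding accuracy: {acc:.2f}")
```

Combining POR and SSVEP features, as reported in the abstract, would amount to concatenating the two feature vectors per trial before fitting the same classifier.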
The present findings suggest that exploiting frequency tagging with pupillary or cortical responses during attention shifts in depth, either separately or in combination, is a promising approach to realizing a communication device for Complete Locked-In Syndrome (CLIS) patients, for whom oculomotor control is unreliable and traditional assistive communication, even when based on PAR, is unsuccessful.