Medical Image Processing Laboratory, Center for Neuroprosthetics, Interschool Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech H4, 1202 Geneva, Switzerland.
Nissan International SA, La Pièce 12, 1180 Rolle, Switzerland.
J Neural Eng. 2021 Feb 26;18(2). doi: 10.1088/1741-2552/abdfb2.
In contrast to classical visual brain-computer interface (BCI) paradigms, which impose a rigid trial structure and restrict user behavior, decoding visual recognition from the electroencephalogram (EEG) during everyday activities remains challenging. The objective of this study is to explore the feasibility of decoding the EEG signature of visual recognition under experimental conditions that promote natural ocular behavior while interacting with a dynamic environment.

In our experiment, subjects visually searched for a target object among objects appearing suddenly in the environment while driving a car simulator. Because subjects exhibited unconstrained overt visual behavior, we based our study on eye fixation-related potentials (EFRPs). We report gaze behavior and single-trial EFRP decoding performance (fixations on visually similar target vs. non-target objects). In addition, we demonstrate the application of our approach in a closed-loop BCI setup.

To identify the target among four symbol types along a road segment, the BCI system integrated the decoding probabilities of multiple EFRPs and achieved an average online accuracy of 0.37 ± 0.06 (12 subjects), statistically significantly above chance level. Using the acquired data, we performed a comparative study of classification algorithms (discriminating target vs. non-target) and feature spaces in a simulated online scenario. The EEG approaches yielded similar, moderate performances of at most 0.6 AUC, yet statistically significantly above chance level. In addition, gaze duration (dwell time) appears to be an informative complementary feature in this context.

These results show that visual recognition of sudden events can be decoded during active driving. This study therefore lays a foundation for assistive and recommender systems based on the driver's brain signals.
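The abstract does not specify how the per-fixation decoding probabilities were combined to select the target among the four symbol types; the sketch below is one plausible illustration, assuming a naive-Bayes-style accumulation of classifier evidence across fixations. All function and variable names are hypothetical and do not reflect the authors' implementation.

```python
# Illustrative sketch (not the authors' code): accumulating per-fixation
# classifier probabilities to decide which of four symbol types is the target.
# Assumes each fixation yields P(target | EFRP) from a pre-trained classifier.
import numpy as np

def pick_target(symbol_ids, p_target, n_symbols=4):
    """Sum per-fixation log-odds for each symbol and return the best-scoring one.

    symbol_ids : int array in [0, n_symbols), the symbol fixated on each fixation
    p_target   : float array, classifier probability that the fixation landed
                 on the target (clipped away from 0/1 for numerical stability)
    """
    p = np.clip(np.asarray(p_target, dtype=float), 1e-6, 1 - 1e-6)
    log_odds = np.log(p) - np.log(1 - p)      # evidence carried by each fixation
    score = np.zeros(n_symbols)
    for sym, lo in zip(symbol_ids, log_odds):
        score[sym] += lo                      # accumulate evidence per symbol
    return int(np.argmax(score))

# Toy example: ten fixations along a road segment; fixations on symbol 2
# receive slightly higher target probabilities, so it wins the accumulation.
symbols = np.array([0, 1, 2, 3, 2, 0, 2, 1, 3, 2])
probs = np.where(symbols == 2, 0.65, 0.45)
print(pick_target(symbols, probs))            # -> 2
```

Accumulating log-odds rather than picking the single most confident fixation is one common way to exploit multiple weak single-trial decisions, which matches the moderate per-fixation decoding performance reported here.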