IDLab-Department of Computer Science, University of Antwerp-imec, Sint-Pietersvliet 7, 2000 Antwerp, Belgium.
WAVES Research Group, Department of Information Technology, Ghent University, Technologiepark 126, Zwijnaarde, 9052 Ghent, Belgium.
Sensors (Basel). 2023 Dec 3;23(23):9588. doi: 10.3390/s23239588.
Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen of whether auditory and rhythmic support could increase attention to visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by pip-and-pop but better suited to eliciting attention and P3a event-related potentials (ERPs). The aim of this study was to distinguish between targets and distractors based on the subject's electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we used saliency maps to investigate which EEG channels and time points the model relied on to make its predictions. We successfully performed the aforementioned classification task in both the IS and CS scenarios, reaching classification accuracies of up to 76%. In accordance with the literature, the model primarily used the parieto-occipital electrodes between 200 ms and 300 ms after the stimulus to make its predictions. The findings from this research contribute to the development of more effective P300-based brain-computer interfaces. Furthermore, they validate the EEG data collected in our experiment.
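The pipeline the abstract describes (per-epoch target/distractor classification followed by saliency inspection) can be illustrated with a minimal, self-contained sketch. This is not the paper's actual model or data: the epoch dimensions, the simulated P3a-like deflection, and the plain logistic-regression classifier are all hypothetical stand-ins chosen so the example runs anywhere with NumPy alone. For a linear model, the gradient of the decision score with respect to the input equals the weight vector, so reshaping the weights back to (channels, time) gives a simple saliency map.

```python
# Hypothetical sketch: classify simulated ERP epochs (target vs. distractor)
# with logistic regression trained by gradient descent, then inspect which
# channel/time features drive the decision via an input-gradient saliency map.
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 8, 50  # assumed, not the paper's sizes

# Simulated epochs: targets (y == 1) receive a P3a-like positive deflection
# on three "posterior" channels in a post-stimulus window.
X = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)
X[y == 1, 5:8, 25:35] += 1.5

Xf = X.reshape(n_epochs, -1)          # flatten each epoch to a feature vector
w = np.zeros(Xf.shape[1])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain full-batch gradient descent on the logistic loss.
for _ in range(300):
    p = sigmoid(Xf @ w + b)
    w -= 0.1 * (Xf.T @ (p - y)) / n_epochs
    b -= 0.1 * np.mean(p - y)

acc = np.mean((sigmoid(Xf @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")

# Saliency: for this linear model the input gradient is the weight vector;
# reshape it to (channels, times) to see which regions the model uses.
saliency = np.abs(w).reshape(n_channels, n_times)
print("per-channel saliency:", np.round(saliency.sum(axis=1), 2))
```

In the study itself, an analogous saliency analysis over the trained models is what localized the discriminative signal to parieto-occipital electrodes in the 200-300 ms window; here the simulated "posterior" channels and late window play that role.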