Design of auditory P300-based brain-computer interfaces with a single auditory channel and no visual support.

Authors

Choi Yun-Joo, Kwon Oh-Sang, Kim Sung-Phil

Affiliations

Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea.

Publication

Cogn Neurodyn. 2023 Dec;17(6):1401-1416. doi: 10.1007/s11571-022-09901-3. Epub 2022 Nov 18.

Abstract

Non-invasive brain-computer interfaces (BCIs) based on an event-related potential (ERP) component, P300, elicited via the oddball paradigm, have been extensively developed to enable device control and communication. While most P300-based BCIs employ visual stimuli in the oddball paradigm, auditory P300-based BCIs also need to be developed for users with unreliable gaze control or limited visual processing. Specifically, auditory BCIs without additional visual support or multi-channel sound sources can broaden the application areas of BCIs. This study aimed to design optimal stimuli for auditory BCIs among artificial (e.g., beep) and natural (e.g., human voice and animal sounds) sounds in such circumstances. In addition, it aimed to investigate differences between auditory and visual stimulations for online P300-based BCIs. As a result, natural sounds led to both higher online BCI performance and larger differences in ERP amplitudes between the target and non-target compared to artificial sounds. However, no single type of sound offered the best performance for all subjects; rather, each subject indicated different preferences between the human voice and animal sound. In line with previous reports, visual stimuli yielded higher BCI performance (average 77.56%) than auditory counterparts (average 54.67%). In addition, spatiotemporal patterns of the differences in ERP amplitudes between target and non-target were more dynamic with visual stimuli than with auditory stimuli. The results suggest that selecting a natural auditory stimulus optimal for individual users as well as making differences in ERP amplitudes between target and non-target stimuli more dynamic may further improve auditory P300-based BCIs.
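The oddball stimulation and the target vs. non-target ERP comparison described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 20% target rate, the four-stimulus set, the no-consecutive-targets constraint, and all function names are assumptions typical of P300 paradigms rather than details taken from the paper.

```python
import random

def oddball_sequence(n_trials=100, n_stimuli=4, target=0, target_rate=0.2, seed=42):
    """Randomized oddball stimulus sequence: the rare target appears at
    ~target_rate; other trials draw uniformly from the non-target stimuli.
    Consecutive targets are avoided so successive P300 responses do not
    overlap (a common, but here assumed, paradigm constraint)."""
    rng = random.Random(seed)
    n_targets = round(n_trials * target_rate)
    slots = [target] * n_targets + [None] * (n_trials - n_targets)
    while True:
        rng.shuffle(slots)
        if all((a, b) != (target, target) for a, b in zip(slots, slots[1:])):
            break
    non_targets = [s for s in range(n_stimuli) if s != target]
    return [s if s == target else rng.choice(non_targets) for s in slots]

def difference_wave(target_epochs, nontarget_epochs):
    """Target-minus-non-target grand-average ERP: average the epochs of
    each condition sample-by-sample, then subtract the two means."""
    def mean_epoch(epochs):
        return [sum(vals) / len(epochs) for vals in zip(*epochs)]
    return [t - n for t, n in
            zip(mean_epoch(target_epochs), mean_epoch(nontarget_epochs))]

seq = oddball_sequence()
print(sum(1 for s in seq if s == 0))  # prints 20: targets out of 100 trials
```

In a real auditory paradigm, each entry of the sequence would trigger playback of one sound (e.g., beep, voice, or animal sound), and EEG epochs time-locked to each onset would be fed to `difference_wave` to expose the P300.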

SUPPLEMENTARY INFORMATION

The online version contains supplementary material available at 10.1007/s11571-022-09901-3.

Similar Articles

2
Effects of Emotional Stimulations on the Online Operation of a P300-Based Brain-Computer Interface.
Front Hum Neurosci. 2021 Feb 26;15:612777. doi: 10.3389/fnhum.2021.612777. eCollection 2021.
7
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
J Neurosci Methods. 2016 Mar 1;261:47-61. doi: 10.1016/j.jneumeth.2015.11.026. Epub 2015 Dec 11.
8
Analysis of Prefrontal Single-Channel EEG Data for Portable Auditory ERP-Based Brain-Computer Interfaces.
Front Hum Neurosci. 2019 Jul 25;13:250. doi: 10.3389/fnhum.2019.00250. eCollection 2019.
9
An Auditory-Tactile Visual Saccade-Independent P300 Brain-Computer Interface.
Int J Neural Syst. 2016 Feb;26(1):1650001. doi: 10.1142/S0129065716500015. Epub 2015 Nov 4.
10
Brain-computer interface with rapid serial multimodal presentation using artificial facial images and voice.
Comput Biol Med. 2021 Sep;136:104685. doi: 10.1016/j.compbiomed.2021.104685. Epub 2021 Jul 27.

Cited By

1
Musical auditory feedback BCI: clinical pilot study of the Encephalophone.
Front Hum Neurosci. 2025 Jun 16;19:1592640. doi: 10.3389/fnhum.2025.1592640. eCollection 2025.
2
Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing.
Front Hum Neurosci. 2024 Nov 26;18:1461960. doi: 10.3389/fnhum.2024.1461960. eCollection 2024.

References

1
Evaluation of Artifact Subspace Reconstruction for Automatic Artifact Components Removal in Multi-Channel EEG Recordings.
IEEE Trans Biomed Eng. 2020 Apr;67(4):1114-1121. doi: 10.1109/TBME.2019.2930186. Epub 2019 Jul 22.
2
Towards BCI-Based Interfaces for Augmented Reality: Feasibility, Design and Evaluation.
IEEE Trans Vis Comput Graph. 2020 Mar;26(3):1608-1621. doi: 10.1109/TVCG.2018.2873737. Epub 2018 Oct 4.
3
Usage of drip drops as stimuli in an auditory P300 BCI paradigm.
Cogn Neurodyn. 2018 Feb;12(1):85-94. doi: 10.1007/s11571-017-9456-y. Epub 2017 Nov 16.
4
Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback.
Front Neurorobot. 2017 Oct 31;11:60. doi: 10.3389/fnbot.2017.00060. eCollection 2017.
5
Synchronised vestibular signals increase the P300 event-related potential elicited by auditory oddballs.
Brain Res. 2016 Oct 1;1648(Pt A):224-231. doi: 10.1016/j.brainres.2016.07.019. Epub 2016 Jul 14.
6
Real-Time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG.
IEEE Trans Biomed Eng. 2015 Nov;62(11):2553-67. doi: 10.1109/TBME.2015.2481482. Epub 2015 Sep 23.
7
Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.
Clin Neurophysiol. 2016 Feb;127(2):1288-1296. doi: 10.1016/j.clinph.2015.08.007. Epub 2015 Aug 29.
8
The PREP pipeline: standardized preprocessing for large-scale EEG analysis.
Front Neuroinform. 2015 Jun 18;9:16. doi: 10.3389/fninf.2015.00016. eCollection 2015.
9
Effects of training and motivation on auditory P300 brain-computer interface performance.
Clin Neurophysiol. 2016 Jan;127(1):379-387. doi: 10.1016/j.clinph.2015.04.054. Epub 2015 Apr 17.
