Liao Wangdan, Liu Hongyun, Wang Weidong
School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China.
Medical Innovation Research Division, Chinese PLA General Hospital, Beijing, 100853, China.
Sci Rep. 2025 Jul 2;15(1):23380. doi: 10.1038/s41598-025-06364-4.
Brain-computer interfaces (BCIs) harness electroencephalographic (EEG) signals for direct neural control of devices, offering significant benefits for individuals with motor impairments. Traditional machine learning methods for EEG-based motor imagery (MI) classification encounter challenges such as manual feature extraction and susceptibility to noise. This paper introduces EEGEncoder, a deep learning framework that employs modified transformers and Temporal Convolutional Networks (TCNs) to surmount these limitations. We propose a novel fusion architecture, the Dual-Stream Temporal-Spatial Block (DSTS), to capture temporal and spatial features and improve the accuracy of the motor imagery classification task. Additionally, we use multiple parallel structures to enhance the model's performance. When tested on the BCI Competition IV-2a dataset, the proposed model achieved an average accuracy of 86.46% in the subject-dependent setting and 74.48% in the subject-independent setting.
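To make the dual-stream idea concrete, the sketch below shows one plausible way a temporal-spatial block of this kind could be organized in PyTorch: a TCN-style dilated-convolution branch over the time axis runs in parallel with a transformer-encoder branch, and their outputs are fused. This is only an illustrative sketch under stated assumptions, not the authors' EEGEncoder implementation; the class name `DualStreamBlock`, the layer sizes, and the concatenation-based fusion are hypothetical choices, with input shapes loosely matching 22-channel BCI Competition IV-2a trials.

```python
# Illustrative sketch of a dual-stream temporal-spatial block (assumed design,
# not the paper's implementation). Layer sizes and fusion scheme are hypothetical.
import torch
import torch.nn as nn


class DualStreamBlock(nn.Module):
    """Temporal (dilated-conv, TCN-style) branch + transformer branch, fused."""

    def __init__(self, channels: int = 22, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Temporal branch: dilated 1-D convolutions over the time axis.
        self.temporal = nn.Sequential(
            nn.Conv1d(channels, d_model, kernel_size=3, padding=2, dilation=2),
            nn.ELU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=4, dilation=4),
            nn.ELU(),
        )
        # Attention branch: project channels to d_model, then self-attention.
        self.proj = nn.Linear(channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.attn = nn.TransformerEncoder(encoder_layer, num_layers=1)
        # Fusion: concatenate the two streams and mix back to d_model.
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. (B, 22, 1000) for IV-2a segments.
        t = self.temporal(x).transpose(1, 2)          # (B, time, d_model)
        s = self.attn(self.proj(x.transpose(1, 2)))   # (B, time, d_model)
        return self.fuse(torch.cat([t, s], dim=-1))   # (B, time, d_model)


if __name__ == "__main__":
    block = DualStreamBlock()
    eeg = torch.randn(8, 22, 1000)    # toy batch of 22-channel EEG trials
    print(block(eeg).shape)           # torch.Size([8, 1000, 64])
```

In a full classifier, several such blocks could run in parallel (matching the abstract's "multiple parallel structures") before pooling over time and passing the result to a classification head; those downstream details are not specified in the abstract.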