
EEG-VTTCNet: A loss joint training model based on the vision transformer and the temporal convolution network for EEG-based motor imagery classification.

Affiliations

The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China.

Publication information

Neuroscience. 2024 Sep 25;556:42-51. doi: 10.1016/j.neuroscience.2024.07.051. Epub 2024 Aug 3.

Abstract

Brain-computer interface (BCI) is a technology that directly connects signals between the human brain and a computer or other external device. Motor imagery electroencephalographic (MI-EEG) signals are considered a promising paradigm for BCI systems, with a wide range of potential applications in medical rehabilitation, human-computer interaction, and virtual reality. Accurate decoding of MI-EEG signals poses a significant challenge due to issues related to the quality of the collected EEG data and to subject variability. Developing an efficient MI-EEG decoding network is therefore crucial and warrants research. This paper proposes a loss joint training model based on the vision transformer (ViT) and the temporal convolutional network (TCN), termed EEG-VTTCNet, to classify MI-EEG signals. To exploit the strengths of both modules, EEG-VTTCNet adopts a shared-convolution strategy and a dual-branch strategy. The two branches perform complementary learning and jointly train the shared convolutional module for better performance. We conducted experiments on the BCI Competition IV-2a and IV-2b datasets, and the proposed network outperformed current state-of-the-art techniques with accuracies of 84.58% and 90.94%, respectively, in the subject-dependent mode. In addition, we used t-SNE to visualize the features extracted by the proposed network, further demonstrating the effectiveness of the feature extraction framework. We also conducted extensive ablation and hyperparameter-tuning experiments to construct a robust network architecture that generalizes well.
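The core idea of the loss joint training described above is that each branch (ViT and TCN) produces its own classification output, and a weighted sum of the two branch losses is back-propagated through the shared convolutional module. The abstract does not give the exact loss weighting or fusion rule, so the sketch below is a minimal illustration under assumed choices: cross-entropy per branch, a hypothetical mixing weight `alpha`, and probability averaging for the fused prediction.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def joint_loss(vit_logits, tcn_logits, labels, alpha=0.5):
    """Weighted sum of the two branch losses. In a full model, the
    gradient of this scalar flows back through both branches into the
    shared convolutional module, which is what 'joint training' means
    here. `alpha` is a hypothetical mixing weight, not a value from
    the paper."""
    l_vit = cross_entropy(softmax(vit_logits), labels)
    l_tcn = cross_entropy(softmax(tcn_logits), labels)
    return alpha * l_vit + (1.0 - alpha) * l_tcn

def fused_predict(vit_logits, tcn_logits):
    # One common fusion choice (assumed, not specified in the abstract):
    # average the branch probabilities and take the argmax.
    return np.argmax(softmax(vit_logits) + softmax(tcn_logits), axis=-1)
```

Because both branch losses share the same upstream feature extractor, a branch that is momentarily weaker still receives a useful training signal from the stronger one, which is the complementary-learning effect the abstract refers to.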

