

A subject transfer neural network fuses Generator and Euclidean alignment for EEG-based motor imagery classification.

Authors

Xie Chengqiang, Wang Li, Yang Jiafeng, Guo Jiaying

Affiliations

School of Electronics and Communication Engineering, Guangzhou University, Guangzhou 510006, China.

Publication

J Neurosci Methods. 2025 Aug;420:110483. doi: 10.1016/j.jneumeth.2025.110483. Epub 2025 May 9.

DOI: 10.1016/j.jneumeth.2025.110483
PMID: 40350042
Abstract

BACKGROUND

A brain-computer interface (BCI) connects the human brain to a computer, enabling individuals to control external devices indirectly through cognitive processes. Although BCIs have great development prospects, significant inter-subject differences in EEG signals hinder users from fully exploiting BCI systems.

NEW METHOD

Addressing these differences while improving BCI classification accuracy remains a key challenge. In this paper, we propose a deep-learning-based transfer learning model that transfers the data distribution from the source domain to the target domain, named the subject transfer neural network combining a Generator with Euclidean alignment (ST-GENN). It consists of three parts: 1) align the original EEG signals in Euclidean space; 2) feed the aligned data to the Generator to obtain transferred features; 3) classify the transferred features with the Convolution-attention-temporal (CAT) classifier.
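Step 1 of the pipeline can be illustrated with a minimal NumPy sketch of standard Euclidean alignment (whitening every trial by the inverse square root of the mean trial covariance). The function name and array shapes here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def euclidean_alignment(trials):
    """Align EEG trials in Euclidean space (standard EA formulation).

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns aligned trials of the same shape.
    """
    # Per-trial spatial covariance (channels x channels)
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])
    r_bar = covs.mean(axis=0)  # reference (mean) covariance
    # Inverse matrix square root via eigendecomposition (r_bar is SPD)
    vals, vecs = np.linalg.eigh(r_bar)
    r_inv_sqrt = (vecs * vals ** -0.5) @ vecs.T
    # Left-multiply every trial; aligned trials then have mean
    # covariance equal to the identity matrix
    return np.einsum('cd,nds->ncs', r_inv_sqrt, trials)
```

After alignment the mean spatial covariance of each subject's trials is the identity, which removes much of the subject-specific distribution shift before the data reach the Generator.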

RESULTS

The model was validated on the BCI Competition IV 2a, BCI Competition IV 2b, and SHU datasets, achieving classification accuracies of 82.85%, 86.28%, and 67.2%, respectively.

COMPARISON WITH EXISTING METHODS

The results are robust to subject variability: the average accuracy of the proposed method outperforms baseline algorithms by margins of 2.03% to 15.43% on the 2a dataset, 0.86% to 10.16% on the 2b dataset, and 3.3% to 17.9% on the SHU dataset.

CONCLUSIONS

The advantage of our model lies in its ability to effectively transfer the experience and knowledge of the source domain data to the target domain, thus bridging the gap between them. Our method can improve the practicability of MI-BCI systems.

Similar Articles

1. A subject transfer neural network fuses Generator and Euclidean alignment for EEG-based motor imagery classification.
J Neurosci Methods. 2025 Aug;420:110483. doi: 10.1016/j.jneumeth.2025.110483. Epub 2025 May 9.
2. Golden subject is everyone: A subject transfer neural network for motor imagery-based brain computer interfaces.
Neural Netw. 2022 Jul;151:111-120. doi: 10.1016/j.neunet.2022.03.025. Epub 2022 Mar 29.
3. A Novel 3D Approach with a CNN and Swin Transformer for Decoding EEG-Based Motor Imagery Classification.
Sensors (Basel). 2025 May 5;25(9):2922. doi: 10.3390/s25092922.
4. Multi-scale self-attention approach for analysing motor imagery signals in brain-computer interfaces.
J Neurosci Methods. 2024 Aug;408:110182. doi: 10.1016/j.jneumeth.2024.110182. Epub 2024 May 23.
5. Enhanced electroencephalogram signal classification: A hybrid convolutional neural network with attention-based feature selection.
Brain Res. 2025 Mar 15;1851:149484. doi: 10.1016/j.brainres.2025.149484. Epub 2025 Feb 2.
6. Subject adaptation convolutional neural network for EEG-based motor imagery classification.
J Neural Eng. 2022 Nov 8;19(6). doi: 10.1088/1741-2552/ac9c94.
7. Multi-scale spatiotemporal attention network for neuron based motor imagery EEG classification.
J Neurosci Methods. 2024 Jun;406:110128. doi: 10.1016/j.jneumeth.2024.110128. Epub 2024 Mar 28.
8. An end-to-end multi-task motor imagery EEG classification neural network based on dynamic fusion of spectral-temporal features.
Comput Biol Med. 2024 Aug;178:108727. doi: 10.1016/j.compbiomed.2024.108727. Epub 2024 Jun 8.
9. Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding.
Comput Biol Med. 2024 Jun;175:108504. doi: 10.1016/j.compbiomed.2024.108504. Epub 2024 Apr 24.
10. Attention-Based DSC-ConvLSTM for Multiclass Motor Imagery Classification.
Comput Intell Neurosci. 2022 May 5;2022:8187009. doi: 10.1155/2022/8187009. eCollection 2022.