


Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding.

Affiliations

School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China.

School of Electrical Engineering and Automation, Anhui University, Hefei, China.

Publication Information

Comput Biol Med. 2024 Jun;175:108504. doi: 10.1016/j.compbiomed.2024.108504. Epub 2024 Apr 24.

DOI: 10.1016/j.compbiomed.2024.108504
PMID: 38701593
Abstract

Convolutional neural network (CNN) has been widely applied in motor imagery (MI)-based brain computer interface (BCI) to decode electroencephalography (EEG) signals. However, due to the limited perceptual field of convolutional kernel, CNN only extracts features from local region without considering long-term dependencies for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it can offer a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines CNN with self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our proposed method outperforms the state-of-the-art methods and achieves 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. The proposed method implies the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provides a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
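The two temporal "modalities" the abstract describes are average-pooled and variance-pooled views of the same feature map, which are then passed through a shared self-attention module. As a rough illustration of that pooling step only (the function name, non-overlapping windowing, and remainder handling are my own assumptions, not details taken from the paper):

```python
import numpy as np

def multimodal_temporal_pool(x, window):
    """Split a (channels, time) feature map into two temporal streams:
    an average-pooled one and a variance-pooled one.

    Pooling uses non-overlapping windows along the time axis and
    discards any trailing remainder that does not fill a full window.
    """
    c, t = x.shape
    n = t // window                      # number of complete windows
    w = x[:, :n * window].reshape(c, n, window)
    return w.mean(axis=-1), w.var(axis=-1)
```

In the paper's architecture, both resulting streams would then share one self-attention module to capture global dependencies, before a convolutional encoder fuses them.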

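The "signal segmentation and recombination" augmentation the abstract mentions can be understood as cutting each trial into time slices and splicing together slices drawn from different trials of the same class. A minimal sketch of that idea, assuming equal-length slices and order-preserving recombination (function name and parameters are hypothetical, not from the paper's code):

```python
import numpy as np

def segment_recombine(trials, labels, n_segments=8, n_aug=1, rng=None):
    """Segmentation-and-recombination augmentation (illustrative sketch).

    Each trial (channels x time) is cut into n_segments equal time
    slices; an augmented trial is assembled by drawing each slice from
    a randomly chosen trial of the same class, keeping slice order so
    the coarse temporal structure is preserved.
    """
    rng = np.random.default_rng(rng)
    aug_x, aug_y = [], []
    for cls in np.unique(labels):
        pool = trials[labels == cls]          # all trials of this class
        seg_len = pool.shape[-1] // n_segments
        for _ in range(n_aug * len(pool)):
            parts = [
                pool[rng.integers(len(pool))][..., s * seg_len:(s + 1) * seg_len]
                for s in range(n_segments)
            ]
            aug_x.append(np.concatenate(parts, axis=-1))
            aug_y.append(cls)
    return np.stack(aug_x), np.array(aug_y)
```

Restricting donors to the same class keeps the augmented trials label-consistent, which is what lets such recombination improve generalization rather than inject label noise.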

Similar Articles

1
Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding.
Comput Biol Med. 2024 Jun;175:108504. doi: 10.1016/j.compbiomed.2024.108504. Epub 2024 Apr 24.
2
A Temporal Dependency Learning CNN With Attention Mechanism for MI-EEG Decoding.
IEEE Trans Neural Syst Rehabil Eng. 2023;31:3188-3200. doi: 10.1109/TNSRE.2023.3299355. Epub 2023 Aug 9.
3
Multi-scale convolutional transformer network for motor imagery brain-computer interface.
Sci Rep. 2025 Apr 15;15(1):12935. doi: 10.1038/s41598-025-96611-5.
4
SMANet: A Model Combining SincNet, Multi-Branch Spatial-Temporal CNN, and Attention Mechanism for Motor Imagery BCI.
IEEE Trans Neural Syst Rehabil Eng. 2025;33:1497-1508. doi: 10.1109/TNSRE.2025.3560993. Epub 2025 Apr 29.
5
MI-Mamba: A hybrid motor imagery electroencephalograph classification model with Mamba's global scanning.
Ann N Y Acad Sci. 2025 Feb;1544(1):242-253. doi: 10.1111/nyas.15288. Epub 2025 Jan 22.
6
MACNet: A Multidimensional Attention-Based Convolutional Neural Network for Lower-Limb Motor Imagery Classification.
Sensors (Basel). 2024 Nov 28;24(23):7611. doi: 10.3390/s24237611.
7
Multiscale Spatial-Temporal Feature Fusion Neural Network for Motor Imagery Brain-Computer Interfaces.
IEEE J Biomed Health Inform. 2025 Jan;29(1):198-209. doi: 10.1109/JBHI.2024.3472097. Epub 2025 Jan 7.
8
CTNet: a convolutional transformer network for EEG-based motor imagery classification.
Sci Rep. 2024 Aug 30;14(1):20237. doi: 10.1038/s41598-024-71118-7.
9
ADFCNN: Attention-Based Dual-Scale Fusion Convolutional Neural Network for Motor Imagery Brain-Computer Interface.
IEEE Trans Neural Syst Rehabil Eng. 2024;32:154-165. doi: 10.1109/TNSRE.2023.3342331. Epub 2024 Jan 15.
10
Adaptive GCN and Bi-GRU-Based Dual Branch for Motor Imagery EEG Decoding.
Sensors (Basel). 2025 Feb 13;25(4):1147. doi: 10.3390/s25041147.

Cited By

1
GAH-TNet: A Graph Attention-Based Hierarchical Temporal Network for EEG Motor Imagery Decoding.
Brain Sci. 2025 Aug 19;15(8):883. doi: 10.3390/brainsci15080883.
2
MCTGNet: A Multi-Scale Convolution and Hybrid Attention Network for Robust Motor Imagery EEG Decoding.
Bioengineering (Basel). 2025 Jul 17;12(7):775. doi: 10.3390/bioengineering12070775.
3
Mifnet: a MamBa-based interactive frequency convolutional neural network for motor imagery decoding.
Cogn Neurodyn. 2025 Dec;19(1):106. doi: 10.1007/s11571-025-10287-1. Epub 2025 Jun 30.
4
Towards decoding motor imagery from EEG signal using optimized back propagation neural network with honey badger algorithm.
Sci Rep. 2025 Jul 1;15(1):21202. doi: 10.1038/s41598-025-05423-0.
5
EA-EEG: a novel model for efficient motor imagery EEG classification with whitening and multi-scale feature integration.
Cogn Neurodyn. 2025 Dec;19(1):94. doi: 10.1007/s11571-025-10278-2. Epub 2025 Jun 17.
6
Feature-aware domain invariant representation learning for EEG motor imagery decoding.
Sci Rep. 2025 Mar 27;15(1):10664. doi: 10.1038/s41598-025-95178-5.
7
Generative Diffusion-Based Task Incremental Learning Method for Decoding Motor Imagery EEG.
Brain Sci. 2025 Jan 21;15(2):98. doi: 10.3390/brainsci15020098.