

A study of motor imagery EEG classification based on feature fusion and attentional mechanisms

Authors

Zhu Tingting, Tang Hailin, Jiang Lei, Li Yijia, Li Shijun, Wu Zhijian

Affiliations

School of Big Data and Computing, Guangdong Baiyun University, Guangzhou, China.

Dropbox Inc., San Francisco, CA, United States.

Publication

Front Hum Neurosci. 2025 Jul 16;19:1611229. doi: 10.3389/fnhum.2025.1611229. eCollection 2025.

DOI: 10.3389/fnhum.2025.1611229
PMID: 40741298
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12307489/
Abstract

INTRODUCTION

Motor imagery EEG-based action recognition is an emerging field at the intersection of brain science and information science, with promising applications in neurorehabilitation and human-computer collaboration. However, existing methods face challenges including the low signal-to-noise ratio of EEG signals, inter-subject variability, and model overfitting.

METHODS

We propose HA-FuseNet, an end-to-end motor imagery action classification network. This model integrates feature fusion and attention mechanisms to classify left hand, right hand, foot, and tongue movements. Its innovations include: (1) multi-scale dense connectivity, (2) hybrid attention mechanism, (3) global self-attention module, and (4) lightweight design for reduced computational overhead.
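The global self-attention component listed above can be illustrated with a minimal sketch. The snippet below implements plain scaled dot-product self-attention over a (time steps × features) array in NumPy; it is an assumption-laden toy, not the paper's HA-FuseNet (which adds learned projections, dense connectivity, and a hybrid attention mechanism not reproduced here).

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a (time, features) array.

    Illustrative only: no learned query/key/value projections or
    multi-head splitting, which a real model such as HA-FuseNet would use.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # pairwise similarity of time steps
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ x                             # attention-weighted mixture of time steps

# Toy "EEG feature" input: 4 time steps, 8 features per step
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = self_attention(x)
assert out.shape == x.shape  # attention preserves the sequence shape
```

Each output time step is a convex combination of all time steps, which is what lets a global attention module relate distant parts of an EEG trial, in contrast to the local receptive fields of the convolutional branches.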

RESULTS

On BCI Competition IV Dataset 2A, HA-FuseNet achieved 77.89% average within-subject accuracy (8.42% higher than EEGNet) and 68.53% cross-subject accuracy.
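The reported gain can be made concrete with a quick check. Assuming the 8.42% improvement is stated in percentage points (the abstract does not say whether it is absolute or relative), the implied EEGNet within-subject baseline would be:

```python
# Implied EEGNet within-subject baseline on BCI Competition IV Dataset 2A,
# assuming the 8.42% gain is in percentage points (an assumption).
ha_fusenet_acc = 77.89
gain_points = 8.42
eegnet_acc = ha_fusenet_acc - gain_points
print(f"{eegnet_acc:.2f}%")  # → 69.47%
```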

CONCLUSION

The model demonstrates robustness to spatial resolution variations and individual differences, effectively mitigating key challenges in motor imagery EEG classification.


Figures 1–7 (full-resolution images at PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/c0b7a283f317/fnhum-19-1611229-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/3f3ba80655eb/fnhum-19-1611229-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/cef9e67e2dde/fnhum-19-1611229-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/40553ae2d9d0/fnhum-19-1611229-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/056b4fd01637/fnhum-19-1611229-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/de5992606668/fnhum-19-1611229-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8abe/12307489/885540f0b429/fnhum-19-1611229-g007.jpg

