Zhu Tingting, Tang Hailin, Jiang Lei, Li Yijia, Li Shijun, Wu Zhijian
School of Big Data and Computing, Guangdong Baiyun University, Guangzhou, China.
Dropbox Inc., San Francisco, CA, United States.
Front Hum Neurosci. 2025 Jul 16;19:1611229. doi: 10.3389/fnhum.2025.1611229. eCollection 2025.
Motor imagery EEG-based action recognition is an emerging field at the intersection of brain science and information science, with promising applications in neurorehabilitation and human-computer collaboration. However, existing methods face challenges including the low signal-to-noise ratio of EEG signals, inter-subject variability, and model overfitting.
We propose HA-FuseNet, an end-to-end motor imagery action classification network. This model integrates feature fusion and attention mechanisms to classify left hand, right hand, foot, and tongue movements. Its innovations include: (1) multi-scale dense connectivity, (2) hybrid attention mechanism, (3) global self-attention module, and (4) lightweight design for reduced computational overhead.
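The abstract does not give the internals of the global self-attention module, but such modules are typically built on scaled dot-product self-attention over feature tokens. The following is a minimal, dependency-free sketch of that operation, assuming identity query/key/value projections for illustration (a trained network like HA-FuseNet would learn separate projection weights); the function name `self_attention` and the toy inputs are hypothetical, not from the paper.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention over a list of feature vectors.

    `tokens` is a list of equal-length vectors (e.g. per-time-step EEG
    feature embeddings). For simplicity the query/key/value projections
    are the identity; a real model learns separate weight matrices.
    """
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Score this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        # Output is the attention-weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, tokens))
                    for j in range(d)])
    return out
```

Because every token attends to every other token regardless of distance, this kind of module captures long-range temporal dependencies in the EEG sequence that local convolutions miss, which is presumably why it complements the multi-scale convolutional branches.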
On BCI Competition IV Dataset 2A, HA-FuseNet achieved 77.89% average within-subject accuracy (8.42% higher than EEGNet) and 68.53% cross-subject accuracy.
The model demonstrates robustness to spatial resolution variations and individual differences, effectively mitigating key challenges in motor imagery EEG classification.