


DSTA-Net: dynamic spatio-temporal feature augmentation network for motor imagery classification.

Authors

Chang Liang, Yang Banghua, Zhang Jiayang, Li Tie, Feng Juntao, Xu Wendong

Affiliations

School of Mechatronic Engineering and Automation, Research Center of Brain-Computer Engineering, Shanghai University, Shanghai, 200444 China.

Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072 China.

Publication

Cogn Neurodyn. 2025 Dec;19(1):118. doi: 10.1007/s11571-025-10296-0. Epub 2025 Jul 23.

DOI: 10.1007/s11571-025-10296-0
PMID: 40718596
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12286908/
Abstract

Accurate decoding and strong feature interpretability of Motor Imagery (MI) are expected to drive MI applications in stroke rehabilitation. However, the inherent nonstationarity and high intra-class variability of MI-EEG pose significant challenges in extracting reliable spatio-temporal features. We propose the Dynamic Spatio-Temporal Feature Augmentation Network (DSTA-Net), which combines DSTA and Spatio-Temporal Convolution (STC) modules. In the DSTA module, multi-scale temporal convolutional kernels are tailored to the α and β frequency bands characteristic of MI neurophysiology, while the raw EEG serves as a baseline feature layer to retain original information. Next, grouped spatial convolutions extract multi-level spatial features, combined with weight constraints to prevent overfitting. Spatial convolution kernels map EEG channel information into a new spatial domain, enabling further feature extraction through dimensional transformation. The STC module then extracts further features and performs classification. We evaluated DSTA-Net on three public datasets and applied it to a self-collected stroke dataset. In tenfold cross-validation, DSTA-Net achieved average accuracy improvements of 6.29% (p < 0.01), 3.05% (p < 0.01), 5.26% (p < 0.01), and 2.25% over ShallowConvNet on the BCI-IV-2a, OpenBMI, CASIA, and stroke datasets, respectively. In hold-out validation, DSTA-Net achieved average accuracy improvements of 3.99% (p < 0.01) and 4.2% (p < 0.01) over ShallowConvNet on the OpenBMI and CASIA datasets, respectively. Finally, we applied DeepLIFT, Common Spatial Pattern, and t-SNE to analyze the contributions of individual EEG channels, extract spatial patterns, and visualize features. The superiority of DSTA-Net offers new insights for further research and application in MI. The code is available at https://github.com/CL-Cloud-BCI/DSTANet-code.
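The multi-scale temporal filtering with a raw-EEG baseline layer described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed shapes (22 channels, 1000 samples) and assumed kernel lengths, not the authors' implementation; the actual model is in the linked GitHub repository.

```python
import numpy as np

def temporal_conv(x, kernel_len, seed=0):
    """Valid-mode 1-D temporal convolution applied per channel.
    x: (channels, samples) -> (channels, samples - kernel_len + 1).
    A random kernel stands in for a learned one in this sketch."""
    rng = np.random.default_rng(seed)
    k = rng.standard_normal(kernel_len) / kernel_len
    return np.stack([np.convolve(ch, k, mode="valid") for ch in x])

def multi_scale_features(x, kernel_lens=(32, 64)):
    """Extract temporal features at several scales, crop to a common
    length, and append the raw EEG as a baseline feature layer
    (mirroring the multi-branch idea in the abstract)."""
    feats = [temporal_conv(x, L) for L in kernel_lens]
    T = min(f.shape[1] for f in feats)
    feats = [f[:, :T] for f in feats] + [x[:, :T]]  # raw EEG baseline
    return np.concatenate(feats, axis=0)

# Toy EEG: 22 channels, 1000 samples (e.g. 4 s at 250 Hz, assumed)
eeg = np.random.default_rng(1).standard_normal((22, 1000))
out = multi_scale_features(eeg)
print(out.shape)  # (66, 937): 2 scales + raw layer, cropped length
```

In the real network the kernels are learned and the stacked feature maps are passed on to the grouped spatial convolutions; here the point is only how multi-scale branches and the baseline layer are concatenated along the feature axis.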


Similar Articles

1. DSTA-Net: dynamic spatio-temporal feature augmentation network for motor imagery classification.
Cogn Neurodyn. 2025 Dec;19(1):118. doi: 10.1007/s11571-025-10296-0. Epub 2025 Jul 23.
2. A feature fusion network with spatial-temporal-enhanced strategy for the motor imagery of force intensity variation.
Front Neurosci. 2025 Jun 20;19:1591398. doi: 10.3389/fnins.2025.1591398. eCollection 2025.
3. Multiscale Spatial-Temporal Feature Fusion Neural Network for Motor Imagery Brain-Computer Interfaces.
IEEE J Biomed Health Inform. 2025 Jan;29(1):198-209. doi: 10.1109/JBHI.2024.3472097. Epub 2025 Jan 7.
4. A transformer-based network with second-order pooling for motor imagery EEG classification.
J Neural Eng. 2025 Jul 2. doi: 10.1088/1741-2552/adeae8.
5. Mifnet: a MamBa-based interactive frequency convolutional neural network for motor imagery decoding.
Cogn Neurodyn. 2025 Dec;19(1):106. doi: 10.1007/s11571-025-10287-1. Epub 2025 Jun 30.
6. Adaptive filter of frequency bands based coordinate attention network for EEG-based motor imagery classification.
Health Inf Sci Syst. 2024 Feb 23;12(1):11. doi: 10.1007/s13755-024-00270-1. eCollection 2024 Dec.
7. A hybrid approach for EEG motor imagery classification using adaptive margin disparity and knowledge transfer in convolutional neural networks.
Comput Biol Med. 2025 Sep;195:110675. doi: 10.1016/j.compbiomed.2025.110675. Epub 2025 Jun 29.
8. MFRC-Net: Multi-Scale Feature Residual Convolutional Neural Network for Motor Imagery Decoding.
IEEE J Biomed Health Inform. 2025 Jan;29(1):224-234. doi: 10.1109/JBHI.2024.3467090. Epub 2025 Jan 7.
9. DMSACNN: Deep Multiscale Attentional Convolutional Neural Network for EEG-Based Motor Decoding.
IEEE J Biomed Health Inform. 2025 Jul;29(7):4884-4896. doi: 10.1109/JBHI.2025.3546288.
10. Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.