
EEG-Based Feature Classification Combining 3D-Convolutional Neural Networks with Generative Adversarial Networks for Motor Imagery.

Affiliations

School of Mechatronic Engineering and Automation, School of Medicine, Research Center of Brain Computer Engineering, 200444 Shanghai, China.

School of Medical Instrument, Shanghai University of Medicine & Health Science, 201318 Shanghai, China.

Publication Information

J Integr Neurosci. 2024 Aug 20;23(8):153. doi: 10.31083/j.jin2308153.

Abstract

BACKGROUND

The adoption of convolutional neural networks (CNNs) for decoding electroencephalogram (EEG)-based motor imagery (MI) in brain-computer interfaces has increased significantly in recent years. Effective extraction of motor imagery features is vital because of the variability across individuals and across temporal states.

METHODS

This study introduces a novel network architecture, 3D-convolutional neural network-generative adversarial network (3D-CNN-GAN), for decoding both within-session and cross-session motor imagery. Initially, EEG signals were extracted over various time intervals using a sliding window technique, capturing temporal, frequency, and phase features to construct a temporal-frequency-phase feature (TFPF) three-dimensional feature map. Generative adversarial networks (GANs) were then employed to synthesize artificial data, which, when combined with the original datasets, expanded the data capacity and enhanced functional connectivity. Moreover, GANs proved capable of learning and amplifying the brain connectivity patterns present in the existing data, generating more distinctive brain network features. A compact, two-layer 3D-CNN model was subsequently developed to efficiently decode these TFPF features.
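The abstract describes the decoding model only at a high level, so the following is a minimal, hypothetical PyTorch sketch of what a compact two-layer 3D-CNN classifier over a TFPF volume might look like. The input layout (time windows x frequency bands x phase bins), the channel counts, kernel sizes, activations, and the example input shape are assumptions made purely for illustration; they are not the configuration reported in the paper.

```python
# Hypothetical sketch of a compact two-layer 3D-CNN for two-class MI decoding.
# The TFPF feature map is treated as a single-channel 3D volume whose axes are
# assumed to be time windows x frequency bands x phase bins. All shapes and
# hyperparameters below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


class TFPF3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # layer 1: local TFPF patterns
            nn.BatchNorm3d(16),
            nn.ELU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),  # layer 2: higher-level combinations
            nn.BatchNorm3d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool3d(1),                      # collapse each trial to one vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, T, F, P) -- time windows, frequency bands, phase bins (assumed layout)
        z = self.features(x).flatten(1)
        return self.classifier(z)


if __name__ == "__main__":
    # Example: a batch of 8 trials, each an assumed 16 x 16 x 16 TFPF volume.
    model = TFPF3DCNN()
    logits = model(torch.randn(8, 1, 16, 16, 16))
    print(logits.shape)  # torch.Size([8, 2])
```

The adaptive average pooling is used here only so that the sketch does not depend on the exact TFPF volume size, which the abstract does not specify; GAN-based augmentation, as described above, would add synthetic TFPF samples to the training set before fitting such a classifier.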

RESULTS

Taking into account session and individual differences in EEG data, tests were conducted on both the public GigaDB dataset and the SHU laboratory dataset. On the GigaDB dataset, our 3D-CNN and 3D-CNN-GAN models achieved two-class within-session motor imagery accuracies of 76.49% and 77.03%, respectively, demonstrating the algorithm's effectiveness and the improvement provided by data augmentation. Furthermore, on the SHU dataset, the 3D-CNN and 3D-CNN-GAN models yielded two-class within-session motor imagery accuracies of 67.64% and 71.63%, and cross-session motor imagery accuracies of 58.06% and 63.04%, respectively.

CONCLUSIONS

The 3D-CNN-GAN algorithm significantly enhances the generalizability of EEG-based motor imagery brain-computer interfaces (BCIs). Additionally, this research offers valuable insights into the potential applications of motor imagery BCIs.
