

MI-EEGNET: A novel convolutional neural network for motor imagery classification.

Affiliations

Laboratory of Computer Science, Faculty of Sciences and Technology, Hassan II University Of Casablanca, LIM@II-FSTM, B.P. 146, Mohammedia 20650, Morocco.

Publication information

J Neurosci Methods. 2021 Apr 1;353:109037. doi: 10.1016/j.jneumeth.2020.109037. Epub 2020 Dec 15.

Abstract

BACKGROUND

Brain-computer interfaces (BCIs) permit humans to interact with machines by decoding brainwaves into commands for a variety of purposes. Convolutional neural networks (ConvNets) have improved the state of the art of motor imagery decoding in an end-to-end approach. However, shallow ConvNets usually perform better than their deep counterparts. Thus, we aim to design a novel ConvNet that is deeper than the existing models, improves performance, and keeps complexity optimal.

NEW METHOD

We develop a ConvNet based on the Inception and Xception architectures that uses convolutional layers to extract temporal and spatial features. We adopt separable and depthwise convolutions to make the ConvNet faster and more efficient. We then introduce a new block, inspired by Inception, that learns richer features and improves classification performance.
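For concreteness, the sketch below illustrates these building blocks in Keras: a temporal convolution, a depthwise convolution across electrodes acting as a learned spatial filter, and an Inception-style block of parallel separable convolutions. The layer sizes, kernel lengths, and input dimensions are illustrative assumptions, not the published MI-EEGNET configuration.

```python
# Minimal Keras sketch of the building blocks described above; sizes and
# hyperparameters are illustrative assumptions, not the paper's exact model.
from tensorflow.keras import layers, models

n_channels, n_samples, n_classes = 22, 1125, 4  # assumed EEG montage / trial length

inp = layers.Input(shape=(n_channels, n_samples, 1))

# Temporal convolution: learns band-pass-like filters along the time axis.
x = layers.Conv2D(16, (1, 64), padding='same', use_bias=False,
                  name='temporal_conv')(inp)
x = layers.BatchNormalization()(x)

# Depthwise convolution over the electrode axis: one spatial filter per
# temporal feature map, with very few parameters.
x = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=2,
                           use_bias=False, name='spatial_depthwise')(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('elu')(x)
x = layers.AveragePooling2D((1, 4))(x)

# Inception-style block: parallel separable convolutions with different
# temporal kernel lengths, concatenated to capture multi-scale features.
branches = []
for k in (16, 32, 64):
    b = layers.SeparableConv2D(16, (1, k), padding='same', use_bias=False)(x)
    b = layers.BatchNormalization()(b)
    b = layers.Activation('elu')(b)
    branches.append(b)
x = layers.Concatenate()(branches)
x = layers.AveragePooling2D((1, 8))(x)

x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(n_classes, activation='softmax')(x)

model = models.Model(inp, out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Depthwise and separable convolutions factor a full convolution into channel-wise and pointwise steps, which is what keeps the parameter count modest even though the network is deeper than shallow ConvNet designs.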

RESULTS

The obtained results are comparable with other state-of-the-art techniques. Moreover, the weights of the convolutional layers give us some insight into the learned features and reveal the most relevant ones.
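As a rough illustration of how such weight inspection can be done, assuming a Keras model like the sketch above (with the assumed layer names 'temporal_conv' and 'spatial_depthwise' and an assumed 250 Hz sampling rate), one can examine a temporal kernel's frequency response and a depthwise kernel's per-electrode weights:

```python
# Sketch of inspecting learned filters; layer names and the 250 Hz sampling
# rate come from the illustrative model above, not from the paper.
import numpy as np

fs = 250  # assumed sampling rate in Hz

# Temporal kernels have shape (1, kernel_length, 1, n_filters).
w_temporal = model.get_layer('temporal_conv').get_weights()[0]
kernel = w_temporal[0, :, 0, 0]                  # first temporal filter
freqs = np.fft.rfftfreq(kernel.size, d=1.0 / fs)
response = np.abs(np.fft.rfft(kernel))
print('Peak frequency of filter 0: %.1f Hz' % freqs[response.argmax()])

# Depthwise kernels have shape (n_channels, 1, n_filters, depth_multiplier):
# one weight per electrode, which can be plotted as a scalp topography.
w_spatial = model.get_layer('spatial_depthwise').get_weights()[0]
print('Per-electrode weights of one spatial filter:', w_spatial[:, 0, 0, 0])
```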

COMPARISON WITH EXISTING METHOD(S)

We show that our model significantly outperforms Filter Bank Common Spatial Pattern (FBCSP), Riemannian Geometry (RG) approaches, and ShallowConvNet (p < 0.05).
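Such significance claims are typically based on a paired test over per-subject accuracies. The sketch below shows one common choice, a Wilcoxon signed-rank test, using made-up placeholder numbers rather than the paper's actual results.

```python
# Hypothetical per-subject accuracies (placeholders, not the paper's data),
# compared with a paired Wilcoxon signed-rank test.
from scipy.stats import wilcoxon

acc_mi_eegnet = [0.81, 0.64, 0.88, 0.70, 0.73, 0.66, 0.85, 0.79, 0.82]
acc_fbcsp     = [0.76, 0.57, 0.81, 0.61, 0.55, 0.45, 0.83, 0.75, 0.79]

stat, p = wilcoxon(acc_mi_eegnet, acc_fbcsp)
print(f'Wilcoxon signed-rank statistic = {stat:.1f}, p = {p:.4f}')
```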

CONCLUSIONS

The obtained results prove that motor imagery decoding is possible without handcrafted features.

