

MI-CAT: A transformer-based domain adaptation network for motor imagery classification.

Affiliations

Jilin University, College of Computer Science and Technology, Changchun, Jilin Province, China; Key Laboratory of Symbol Computation and Knowledge Engineering, Jilin University, Changchun 130012, China.

Xi'an Jiaotong University, College of Electronic Information, Xi'an, Shaanxi Province, China.

Publication Information

Neural Netw. 2023 Aug;165:451-462. doi: 10.1016/j.neunet.2023.06.005. Epub 2023 Jun 7.

Abstract

Due to its convenience and safety, electroencephalography (EEG) is one of the most widely used signals in motor imagery (MI) brain-computer interfaces (BCIs). In recent years, deep-learning-based methods have been widely applied to BCIs, and some studies have begun applying the Transformer to EEG signal decoding because of its superior ability to capture global information. However, EEG signals vary from subject to subject, and how to effectively use data from other subjects (the source domain) to improve the classification performance of a single subject (the target domain) with a Transformer remains a challenge. To fill this gap, we propose a novel architecture called MI-CAT. The architecture innovatively uses the Transformer's self-attention and cross-attention mechanisms to interact features and resolve the distribution discrepancy between domains. Specifically, we adopt a patch embedding layer that divides the extracted source and target features into multiple patches. Then, we attend to intra-domain and inter-domain features by stacking multiple Cross-Transformer Blocks (CTBs), which adaptively conduct bidirectional knowledge transfer and information exchange between domains. Furthermore, we utilize two non-shared, domain-based attention blocks to efficiently capture domain-dependent information, optimizing the features extracted from the source and target domains to assist feature alignment. To evaluate our method, we conduct extensive experiments on two public EEG datasets, Dataset IIb and Dataset IIa, achieving competitive average classification accuracies of 85.26% and 76.81%, respectively. The experimental results demonstrate that our method is a powerful model for decoding EEG signals and facilitates the development of the Transformer for BCIs.
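The core mechanism the abstract describes — self-attention within each domain's patch sequence followed by cross-attention that exchanges information between the source and target domains — can be sketched as follows. This is a minimal illustrative sketch in plain numpy, not the authors' implementation: the function names, weight sharing, dimensions, and the single-head, residual-only block structure are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over patch sequences.
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores) @ v

def cross_transformer_block(x, other, w_q, w_k, w_v):
    # Intra-domain: self-attention within this domain's own patches.
    x = x + attention(x @ w_q, x @ w_k, x @ w_v)
    # Inter-domain: queries from this domain, keys/values from the other
    # domain, so features are exchanged across the domain gap.
    x = x + attention(x @ w_q, other @ w_k, other @ w_v)
    return x

rng = np.random.default_rng(0)
dim = 8
w_q, w_k, w_v = (rng.standard_normal((dim, dim)) * 0.1 for _ in range(3))
src = rng.standard_normal((2, 16, dim))  # (batch, patches, dim) source features
tgt = rng.standard_normal((2, 16, dim))  # target-domain features
# Bidirectional exchange: each domain attends to itself and to the other.
src_out = cross_transformer_block(src, tgt, w_q, w_k, w_v)
tgt_out = cross_transformer_block(tgt, src, w_q, w_k, w_v)
```

Stacking several such blocks, as the paper does with its CTBs, lets the source and target representations be refined jointly rather than aligned only once at the output.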

