School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China.
Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China.
J Neural Eng. 2023 Mar 13;20(2). doi: 10.1088/1741-2552/acbfdf.
A motor imagery-based brain-computer interface (MI-BCI) translates spontaneous movement intention from the brain to outside devices. Multimodal MI-BCI that uses multiple neural signals contains rich common and complementary information and is promising for enhancing the decoding accuracy of MI-BCI. However, the heterogeneity of the different modalities makes the multimodal decoding task difficult, and how to effectively utilize multimodal information remains to be further studied. In this study, a multimodal MI decoding neural network was proposed. Spatial feature alignment losses were designed to enhance the feature representations extracted from the heterogeneous data and to guide the fusion of features from different modalities. An attention-based modality fusion module was built to align and fuse the features in the temporal dimension. To evaluate the proposed decoding method, a five-class MI electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) dataset was constructed. The comparison experiments showed that the proposed decoding method achieved higher decoding accuracy than the compared methods on both the self-collected dataset and a public dataset. The ablation results verified the effectiveness of each part of the proposed method. Feature distribution visualizations showed that the proposed losses enhance the feature representations of the EEG and fNIRS modalities. The proposed method based on EEG and fNIRS modalities has significant potential for improving the decoding performance of MI tasks.
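The abstract describes an attention-based module that aligns and fuses EEG and fNIRS features in the temporal dimension. The paper's actual architecture is not given here; the following is only a minimal sketch of one common way to do such temporal alignment, using scaled dot-product cross-attention so that the (faster-sampled) EEG feature sequence attends to the fNIRS feature sequence before the two are concatenated. All function names, shapes, and the concatenation-based fusion are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(eeg_feat, fnirs_feat):
    """Hypothetical sketch: align fNIRS features to the EEG timeline via
    scaled dot-product attention, then fuse by feature concatenation.

    eeg_feat:   (T_eeg, d)   EEG feature sequence (queries)
    fnirs_feat: (T_nirs, d)  fNIRS feature sequence (keys/values)
    returns:    (T_eeg, 2*d) fused feature sequence
    """
    d = eeg_feat.shape[-1]
    # Attention scores between every EEG step and every fNIRS step.
    scores = eeg_feat @ fnirs_feat.T / np.sqrt(d)       # (T_eeg, T_nirs)
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    aligned = weights @ fnirs_feat                      # fNIRS resampled to EEG steps
    return np.concatenate([eeg_feat, aligned], axis=-1)

rng = np.random.default_rng(0)
fused = cross_modal_attention(rng.normal(size=(40, 16)),
                              rng.normal(size=(10, 16)))
print(fused.shape)  # (40, 32)
```

EEG and fNIRS typically have very different sampling rates, which is why some temporal alignment step (attention, interpolation, or pooling) is needed before fusion; the attention variant lets the network learn the alignment rather than fixing it.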