Wang Zihan, Yu Junqi, Gao Jiahui, Bai Yang, Wan Zhijiang
School of Information Engineering, Nanchang University, Nanchang 330031, China.
School of Public Policy and Administration, Nanchang University, Nanchang 330031, China.
Brain Sci. 2024 Jul 10;14(7):688. doi: 10.3390/brainsci14070688.
Deep learning (DL) has been demonstrated to be a valuable tool for classifying states of disorders of consciousness (DOC) using EEG signals. However, the performance of DL-based DOC state classification is often limited by the small size of available EEG datasets. To overcome this issue, we introduce multiple open-source EEG datasets to increase data volume and train a novel multi-task pre-training Transformer model named MutaPT. Furthermore, we propose a cross-distribution self-supervised (CDS) pre-training strategy to enhance the model's generalization ability by addressing data distribution shifts across the multiple datasets. An EEG dataset of DOC patients is used to validate the effectiveness of our methods on the task of classifying DOC states. Experimental results demonstrate the superiority of MutaPT over several DL models for EEG classification.
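The abstract does not describe the implementation of MutaPT or the CDS pre-training strategy. Purely as an illustrative sketch (the dataset names, per-dataset z-scoring, and masked-segment pretext task below are assumptions, not the paper's method), pooling several EEG datasets for self-supervised pre-training while mitigating cross-dataset distribution shift might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for multiple open-source EEG datasets, each with a
# different distribution; shape is (n_trials, n_channels, n_samples).
datasets = {
    "dataset_a": rng.normal(0.0, 10.0, size=(8, 4, 256)),
    "dataset_b": rng.normal(5.0, 2.0, size=(6, 4, 256)),
}

def per_dataset_standardize(x):
    """Z-score each dataset independently per channel -- one simple way to
    reduce cross-dataset distribution shift before pooled pre-training."""
    mu = x.mean(axis=(0, 2), keepdims=True)
    sd = x.std(axis=(0, 2), keepdims=True) + 1e-8
    return (x - mu) / sd

def mask_segments(x, mask_ratio=0.25, seg_len=32, rng=None):
    """Masked-segment pretext task: zero out random time segments; a
    self-supervised objective would reconstruct the masked parts."""
    rng = rng or np.random.default_rng()
    x_masked = x.copy()
    mask = np.zeros(x.shape, dtype=bool)
    n_segs = x.shape[-1] // seg_len
    for trial in range(x.shape[0]):
        n_mask = max(1, int(mask_ratio * n_segs))
        starts = rng.choice(n_segs, size=n_mask, replace=False) * seg_len
        for s in starts:
            mask[trial, :, s:s + seg_len] = True
    x_masked[mask] = 0.0
    return x_masked, mask

# Normalize each dataset separately, then pool trials for pre-training.
pooled = np.concatenate([per_dataset_standardize(d) for d in datasets.values()])
x_masked, mask = mask_segments(pooled, rng=np.random.default_rng(1))
```

A Transformer encoder would then be trained to reconstruct `pooled` from `x_masked` on the masked positions before fine-tuning on the DOC classification task; that step is omitted here.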