Kim Da-Hyun, Shin Dong-Hee, Kam Tae-Eui
Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea.
Front Hum Neurosci. 2023 May 15;17:1194751. doi: 10.3389/fnhum.2023.1194751. eCollection 2023.
Brain-computer interfaces (BCIs) facilitate direct interaction between the human brain and computers, enabling individuals to control external devices through cognitive processes. Despite their potential, BCI illiteracy remains one of the major challenges due to inter-subject EEG variability, which prevents many users from operating BCI systems effectively. In this study, we propose a subject-to-subject semantic style transfer network (SSSTN) at the feature level to address the BCI illiteracy problem in electroencephalogram (EEG)-based motor imagery (MI) classification tasks.
Our approach uses the continuous wavelet transform to convert high-dimensional EEG data into images that serve as the network's input. The SSSTN 1) trains a classifier for each subject, 2) transfers the distribution of class-discriminative styles from the source subject (the subject on whom the classifier performs best, i.e., the BCI expert) to each subject in the target domain (the remaining subjects, specifically BCI illiterates) through the proposed style loss, while applying a modified content loss to preserve the class-relevant semantic information of the target domain, and 3) merges the predictions of the source- and target-subject classifiers using an ensemble technique.
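The abstract does not give the exact formulations of the preprocessing step or the losses. The sketch below is a minimal illustration under stated assumptions: PyWavelets for the CWT-based image conversion and, in the spirit of neural style transfer, a Gram-matrix style loss, an MSE content loss, and a soft-voting ensemble. Function names and hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np
import pywt
import torch
import torch.nn.functional as F

def eeg_to_cwt_image(signal, fs=250, wavelet="morl", n_scales=64):
    """Convert one EEG channel of a trial into a 2-D time-frequency image via the CWT."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    image = np.abs(coeffs)  # magnitude of wavelet coefficients forms the "image"
    return (image - image.min()) / (image.max() - image.min() + 1e-8)

def gram_matrix(feats):
    """Second-order statistics of CNN feature maps with shape (batch, c, h, w)."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(target_feats, source_feats):
    # Pull the target subject's feature statistics toward the expert (source) subject's.
    return F.mse_loss(gram_matrix(target_feats), gram_matrix(source_feats))

def content_loss(translated_feats, original_target_feats):
    # Preserve the class-relevant content of the target subject's own features.
    return F.mse_loss(translated_feats, original_target_feats)

def ensemble_predict(source_logits, target_logits):
    # Soft-voting ensemble of the source- and target-subject classifiers.
    probs = 0.5 * (F.softmax(source_logits, dim=1) + F.softmax(target_logits, dim=1))
    return probs.argmax(dim=1)
```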
We evaluate the proposed method on the BCI Competition IV-2a and IV-2b datasets and demonstrate improved classification performance over existing methods, especially for BCI-illiterate users. Ablation experiments and t-SNE visualizations further highlight the effectiveness of the proposed method in achieving meaningful feature-level semantic style transfer.
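For reference, a t-SNE projection of learned features can be produced along the following lines; scikit-learn and matplotlib are assumed, and the plotting details are illustrative rather than taken from the paper.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_embedding(features, labels):
    """Project (n_trials, feature_dim) classifier features to 2-D for visual inspection."""
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    sc = plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=10)
    plt.legend(*sc.legend_elements(), title="class")
    plt.title("t-SNE of learned MI features")
    plt.show()
```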