Cheng Shichao, Wang Yifan, Mei Jiawei, Lin Guang, Zhang Jianhai, Kong Wanzeng
School of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China.
Key Research and Development Project of Zhejiang Province, Hangzhou 310018, China.
Cogn Neurodyn. 2025 Dec;19(1):84. doi: 10.1007/s11571-025-10272-8. Epub 2025 Jun 3.
Electroencephalogram (EEG)-based emotion recognition has received increasing attention in affective computing. Because EEG signals are non-stationary and non-linear, EEG data exhibit significant individual differences. Previous studies have adopted domain adaptation strategies to minimize the distribution gap between individuals and have achieved reasonable results. However, because they ignore the influence of subject-dependent background signals on task-dependent emotional signals, most of these methods can only align the source-domain and target-domain data as a whole, so samples from different categories may be confused. To address this limitation, this paper proposes a conditional probability-based domain adversarial network (CPDAN) for cross-subject EEG-based emotion recognition. Motivated by the characteristics of cross-subject EEG signals, CPDAN uses separate branch networks to disentangle background features and task features from the EEG signals. In addition, CPDAN applies domain-adversarial training to model both the global-domain and local-domain discrepancies, reducing the intra-class distance and enlarging the inter-class distance. Extensive experiments on SEED and SEED-IV demonstrate that the proposed CPDAN framework outperforms the comparison methods; on SEED-IV in particular, the average accuracy of CPDAN improves by 22% over the comparison methods.
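To make the described architecture concrete, the sketch below shows one plausible reading of the abstract in PyTorch: two branch encoders separating task features from subject-dependent background features, an emotion classifier, a global domain discriminator, and per-class (local) discriminators whose inputs are weighted by the classifier's softmax probabilities for conditional alignment. This is a hedged illustration, not the authors' implementation; the layer sizes, the 310-dimensional DE feature input, and the probability-weighting scheme are assumptions.

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer used in domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class CPDANSketch(nn.Module):
    """Illustrative sketch (not the paper's code): a task branch and a
    background branch, an emotion classifier, a global domain discriminator,
    and one local discriminator per class weighted by predicted probabilities."""

    def __init__(self, in_dim=310, feat_dim=64, n_classes=3):
        super().__init__()
        self.task_branch = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                         nn.Linear(128, feat_dim), nn.ReLU())
        self.bg_branch = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.global_disc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                         nn.Linear(64, 2))
        # One local (class-conditional) discriminator per emotion category.
        self.local_discs = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))
             for _ in range(n_classes)])

    def forward(self, x, lambd=1.0):
        task_feat = self.task_branch(x)        # task-dependent emotional features
        bg_feat = self.bg_branch(x)            # subject-dependent background features
        logits = self.classifier(task_feat)
        probs = torch.softmax(logits, dim=1)

        rev = grad_reverse(task_feat, lambd)   # reversed gradients confuse the discriminators
        global_dom = self.global_disc(rev)     # whole-distribution (global) alignment
        # Local alignment: each class discriminator sees features weighted by
        # the predicted probability of its class (conditional-probability weighting).
        local_dom = [disc(rev * probs[:, k:k + 1])
                     for k, disc in enumerate(self.local_discs)]
        return logits, bg_feat, global_dom, local_dom


if __name__ == "__main__":
    model = CPDANSketch()
    x = torch.randn(8, 310)                    # e.g. 62 channels x 5 DE frequency bands
    logits, bg, g_dom, l_dom = model(x)
    print(logits.shape, g_dom.shape, len(l_dom))
```

In this reading, the classification loss on source labels, the two domain-adversarial losses (global and local), and a constraint separating the background branch from the task branch would be combined during training; the exact loss weighting is not specified in the abstract.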