Perry Fordson Hayford, Xing Xiaofen, Guo Kailing, Xu Xiangmin
School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China.
School of Future Technology, South China University of Technology, Guangzhou, China.
Front Neurosci. 2022 May 24;16:865201. doi: 10.3389/fnins.2022.865201. eCollection 2022.
Emotion recognition through affective brain-computer interfaces (aBCI) has garnered considerable attention in human-computer interaction. Electroencephalographic (EEG) signals collected and stored in a single database have mostly been used, owing to their reliability and their ability to detect brain activity in real time. Nevertheless, large individual differences in EEG occur among subjects, making it impossible for models to share information across them. New labeled data must be collected, and a model trained separately, for each new subject, which costs considerable time. Moreover, when EEG data are collected across databases, subjects are exposed to different forms of stimulation; audio-visual stimulation (AVS) is commonly used to study subjects' emotional responses. In this article, we propose a brain region aware domain adaptation (BRADA) algorithm that treats features from auditory and visual brain regions differently, effectively tackling subject-to-subject variation and mitigating distribution mismatch across databases. BRADA is a new framework that works with existing transfer learning methods. We apply BRADA in both cross-subject and cross-database settings. The experimental results indicate that our proposed transfer learning method improves valence-arousal emotion recognition.
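The abstract does not spell out how region-aware adaptation is computed, but the core idea of weighting auditory and visual brain-region features differently when aligning source and target distributions can be sketched with a simple per-region maximum mean discrepancy (MMD). This is a minimal illustration under assumed conventions: the linear-kernel MMD, the `region_aware_mmd` helper, and the slicing of feature columns into region groups are all hypothetical and not the paper's actual implementation.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Linear-kernel MMD: squared distance between the mean feature
    vectors of the source (Xs) and target (Xt) samples."""
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

def region_aware_mmd(Xs, Xt, region_slices, weights=None):
    """Sum of per-region MMD terms, so feature groups from different
    brain regions (e.g. auditory vs. visual channels) can be aligned
    with separate weights instead of one global discrepancy term.

    region_slices: column slices of the feature matrix, one per region
    (the grouping itself is an assumption for illustration)."""
    if weights is None:
        weights = [1.0] * len(region_slices)
    return sum(w * mmd_linear(Xs[:, sl], Xt[:, sl])
               for w, sl in zip(weights, region_slices))

# Example: 6 features split into two hypothetical region groups.
rng = np.random.default_rng(0)
source = rng.normal(size=(50, 6))           # e.g. one subject/database
target = source + 1.0                       # shifted target distribution
regions = [slice(0, 3), slice(3, 6)]        # auditory-like, visual-like
gap = region_aware_mmd(source, target, regions, weights=[2.0, 1.0])
```

A discrepancy of this form would typically be added to the training loss of an existing transfer learning method, which matches the abstract's claim that BRADA works alongside such methods rather than replacing them.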