IEEE J Biomed Health Inform. 2023 Oct;27(10):4758-4767. doi: 10.1109/JBHI.2023.3301993. Epub 2023 Oct 5.
Recently, electroencephalographic (EEG) emotion recognition has attracted attention in the field of human-computer interaction (HCI). However, most existing EEG emotion datasets consist primarily of data from normal-hearing subjects. To enhance diversity, this study collected EEG signals from 30 hearing-impaired subjects while they watched video clips eliciting six different emotions (happiness, inspiration, neutral, anger, fear, and sadness). The frequency-domain feature matrices of the EEG signals, comprising power spectral density (PSD) and differential entropy (DE), were up-sampled using cubic spline interpolation to capture the correlation among different channels. To select emotion-representative information from both global and localized brain regions, a novel method called the Shifted EEG Channel Transformer (SECT) was proposed. The SECT method consists of two layers: the first layer uses the traditional channel Transformer (CT) structure to process information from global brain regions, while the second layer acquires localized information from centrally symmetric, reorganized brain regions via a shifted channel Transformer (S-CT). We conducted subject-dependent experiments, and the accuracy of the PSD and DE features reached 82.51% and 84.76%, respectively, for six-class emotion classification. Moreover, subject-independent experiments on public datasets yielded accuracies of 85.43% (3-class, SEED), 66.83% (2-class on valence, DEAP), and 65.31% (2-class on arousal, DEAP).
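The cubic spline up-sampling step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 8x9 scalp-grid layout, the 32x36 target resolution, and the random feature values are all illustrative assumptions; only the use of 2-D cubic spline interpolation to up-sample a per-band PSD/DE feature map comes from the abstract.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical per-band feature map: DE or PSD values for EEG channels
# arranged on a 2-D scalp grid (8x9 is an assumed layout, not the paper's).
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 9))

rows = np.arange(feat.shape[0])
cols = np.arange(feat.shape[1])

# Bicubic spline (kx=ky=3) fitted exactly through the grid values (s=0).
spline = RectBivariateSpline(rows, cols, feat, kx=3, ky=3)

# Up-sample to a denser grid (32x36 here is an illustrative target size),
# interpolating feature values between neighboring channels.
new_rows = np.linspace(0, feat.shape[0] - 1, 32)
new_cols = np.linspace(0, feat.shape[1] - 1, 36)
up = spline(new_rows, new_cols)

print(up.shape)  # (32, 36)
```

Because the spline is an interpolant (smoothing factor 0), it reproduces the original channel values exactly at the original grid points while filling in smooth estimates between channels, which is what lets the later Transformer layers exploit inter-channel spatial correlation.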