Institute for Digital Technologies, Loughborough University London, London E20 3BS, UK.
Sensors (Basel). 2020 Apr 4;20(7):2034. doi: 10.3390/s20072034.
The electroencephalogram (EEG) is attractive for emotion recognition studies because of its resistance to deceptive actions; this is one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions across different people, as well as for the same person at different times. This nonstationary nature of EEG limits recognition accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band power features from the EEG readings, our study uses raw EEG data after applying windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training pipeline avoids the risk of discarding hidden features in the raw data and helps leverage the deep neural network's power to uncover unknown features. To improve classification accuracy further, a median filter is used to eliminate false detections along a prediction interval of emotions. This method yields mean cross-subject accuracies of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields mean cross-subject accuracies of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested on the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and it has limited complexity owing to the elimination of feature extraction.
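To make the preprocessing and post-processing steps described in the abstract concrete, the sketch below shows one plausible Python/NumPy implementation of (a) slicing raw multichannel EEG into normalized windows and (b) median-filtering the per-window class predictions. The sampling rate, window length, step size, and median-filter kernel size are illustrative assumptions, not values taken from the paper; the pre-adjustments mentioned in the abstract are reduced here to per-channel z-scoring.

```python
import numpy as np
from scipy.signal import medfilt

def window_and_normalize(eeg, fs=200, win_sec=1.0, step_sec=0.5):
    """Slice a continuous recording (channels x samples) into
    overlapping windows and z-score each window per channel.
    Window/step lengths are assumed values for illustration."""
    win = int(win_sec * fs)
    step = int(step_sec * fs)
    windows = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        seg = eeg[:, start:start + win]
        mu = seg.mean(axis=1, keepdims=True)
        sd = seg.std(axis=1, keepdims=True) + 1e-8  # avoid divide-by-zero
        windows.append((seg - mu) / sd)
    return np.stack(windows)  # shape: (n_windows, channels, win)

def smooth_predictions(labels, kernel=9):
    """Median-filter a sequence of per-window class labels so that
    isolated false detections inside a prediction interval are
    suppressed. kernel must be odd; 9 is an assumed value."""
    return medfilt(labels.astype(float), kernel_size=kernel).astype(int)

# Usage sketch with dummy data standing in for a recording and CNN outputs:
fs = 200
eeg = np.random.randn(62, 60 * fs)           # 62-channel, 60 s recording
X = window_and_normalize(eeg, fs=fs)         # windows fed to the CNN
raw_preds = np.random.randint(0, 2, len(X))  # stand-in for CNN predictions
clean_preds = smooth_predictions(raw_preds)  # smoothed emotion labels
```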