Psychology Department, University of Aston, Birmingham B4 7ET, UK.
Department of Biomedical Engineering, University of Bonab, Bonab 55517-61167, Iran.
Sensors (Basel). 2024 Sep 10;24(18):5883. doi: 10.3390/s24185883.
Emotion is a complex state arising from the functioning of the human brain in response to various events, and it has no single scientific definition. Emotion recognition has traditionally been performed by psychologists and experts based on facial expressions, an approach that is limited in scope and prone to error. This study presents a new automatic method for emotion recognition from electroencephalogram (EEG) signals that combines graph theory with convolutional networks. In the proposed model, a comprehensive database based on musical stimuli is first compiled to induce two- and three-class emotional states, covering positive, negative, and neutral emotions. Generative adversarial networks (GANs) are used to augment the recorded data, which are then fed into the proposed deep network for feature extraction and classification. The network, built from four GConv layers, extracts the dynamic information in the EEG data in an optimal manner. The proposed approach achieves classification accuracies of 99% for two classes and 98% for three classes. Compared with recent research and algorithms, the model yields promising results, and the method can serve as one piece of the brain-computer interface (BCI) systems puzzle.
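The abstract's core architectural idea, stacking graph-convolution (GConv) layers over an EEG electrode graph before classification, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the electrode count, feature dimensions, random adjacency, and readout head are all hypothetical placeholders, and the standard symmetrically normalized propagation rule ReLU(Â X W) is assumed for each GConv layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical electrode graph: 8 EEG channels with a symmetric adjacency.
n_nodes, n_feats = 8, 16
A = (rng.random((n_nodes, n_nodes)) > 0.6).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)  # self-loops

# Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gconv(X, W):
    """One graph-convolution layer: ReLU(A_hat @ X @ W)."""
    return np.maximum(A_hat @ X @ W, 0.0)

# Four stacked GConv layers, matching the layer count in the abstract;
# the hidden widths are arbitrary choices for the sketch.
dims = [n_feats, 32, 32, 16, 8]
weights = [rng.standard_normal((i, o)) * 0.1 for i, o in zip(dims[:-1], dims[1:])]

X = rng.standard_normal((n_nodes, n_feats))  # per-channel EEG features
for W in weights:
    X = gconv(X, W)

# Global mean pooling plus a linear softmax readout into 3 emotion classes
# (positive, negative, neutral).
W_out = rng.standard_normal((dims[-1], 3)) * 0.1
logits = X.mean(axis=0) @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

In a real pipeline the weights would be learned end-to-end and the adjacency would encode actual electrode neighborhoods (e.g. spatial proximity or functional connectivity), but the forward pass has this shape.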