

Latent Factor Decoding of Multi-Channel EEG for Emotion Recognition Through Autoencoder-Like Neural Networks.

Author Information

Li Xiang, Zhao Zhigang, Song Dawei, Zhang Yazhou, Pan Jingshan, Wu Lu, Huo Jidong, Niu Chunyang, Wang Di

Affiliations

Key Laboratory of Medical Artificial Intelligence, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China.

School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China.

Publication Information

Front Neurosci. 2020 Mar 2;14:87. doi: 10.3389/fnins.2020.00087. eCollection 2020.

Abstract

Robust cross-subject emotion recognition based on multichannel EEG has long been a challenging task. In this work, we hypothesize that default brain variables shared across subjects exist in emotional processes; hence, the states of the latent variables related to emotional processing should contribute to building robust recognition models. Specifically, we propose to use an unsupervised deep generative model (a variational autoencoder) to extract latent factors from multichannel EEG, and we then evaluate emotion recognition performance on the learned latent factors with a sequence modeling method. The proposed methodology is verified on two public datasets (DEAP and SEED) and compared with a traditional matrix factorization approach (ICA) and autoencoder-based approaches. Experimental results demonstrate that autoencoder-like neural networks are suitable for unsupervised EEG modeling and that the proposed emotion recognition framework achieves encouraging performance. To the best of our knowledge, this is the first work to introduce the variational autoencoder into multichannel EEG decoding for emotion recognition. We believe the proposed approach is not only feasible for emotion recognition but also promising for diagnosing conditions such as depression, Alzheimer's disease, and mild cognitive impairment, in which the corresponding latent processes may be altered or aberrant compared with healthy controls.
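
A minimal sketch of the pipeline the abstract describes, assuming a PyTorch implementation: a variational autoencoder compresses each multichannel EEG sample (e.g., the 32 DEAP channels at one time point) into a low-dimensional latent vector, and a sequence model (an LSTM, used here as one plausible choice of "sequence modeling method") classifies emotion from the resulting latent-factor sequence. The latent dimension, hidden sizes, and binary target below are illustrative assumptions, not the authors' reported configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EEGVAE(nn.Module):
        """VAE mapping one multichannel EEG sample to a latent factor vector."""
        def __init__(self, n_channels=32, latent_dim=8, hidden=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_channels, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, latent_dim)
            self.logvar = nn.Linear(hidden, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_channels))

        def encode(self, x):
            h = self.enc(x)
            return self.mu(h), self.logvar(h)

        def reparameterize(self, mu, logvar):
            std = torch.exp(0.5 * logvar)
            return mu + std * torch.randn_like(std)

        def forward(self, x):
            mu, logvar = self.encode(x)
            z = self.reparameterize(mu, logvar)
            return self.dec(z), mu, logvar

    def vae_loss(x, recon, mu, logvar):
        # Reconstruction error plus KL divergence to the standard normal prior.
        rec = F.mse_loss(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kld

    class LatentSequenceClassifier(nn.Module):
        """Sequence model over per-time-point latent factors of one trial."""
        def __init__(self, latent_dim=8, hidden=32, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_classes)

        def forward(self, z_seq):            # z_seq: (batch, time, latent_dim)
            _, (h, _) = self.lstm(z_seq)
            return self.out(h[-1])           # emotion class scores per trial

    # Usage: train the VAE unsupervised on channel vectors, then feed the
    # sequence of latent means for each trial to the classifier.
    vae = EEGVAE()
    clf = LatentSequenceClassifier()
    x = torch.randn(4 * 128, 32)             # 4 trials x 128 time points, 32 channels
    recon, mu, logvar = vae(x)
    loss = vae_loss(x, recon, mu, logvar)     # minimize with an optimizer
    z_seq = mu.detach().reshape(4, 128, 8)    # latent factor sequence per trial
    logits = clf(z_seq)                       # emotion predictions per trial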


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f5a/7061897/f2f964c41d75/fnins-14-00087-g0001.jpg
