
Deep Convolutional Neural Network-Based Visual Stimuli Classification Using Electroencephalography Signals of Healthy and Alzheimer's Disease Subjects.

Authors

Komolovaitė Dovilė, Maskeliūnas Rytis, Damaševičius Robertas

Affiliations

Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania.

Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania.

Publication

Life (Basel). 2022 Mar 4;12(3):374. doi: 10.3390/life12030374.

Abstract

Visual perception is an important part of human life. In the context of facial recognition, it allows us to distinguish emotions and the facial features that set one person apart from another. However, subjects suffering from memory loss face significant face-processing problems. If the perception of facial features is affected by memory impairment, then it should be possible to classify visual stimuli using brain activity data from the visual processing regions of the brain. This study separates the aspects of familiarity and emotion via the face-inversion effect and uses convolutional neural network (CNN) models (EEGNet, EEGNet SSVEP (steady-state visual evoked potentials), and DeepConvNet) to learn discriminative features from raw electroencephalography (EEG) signals. Because the number of available EEG data samples is limited, Generative Adversarial Networks (GAN) and Variational Autoencoders (VAE) are introduced to generate synthetic EEG signals. The generated data are used to pretrain the models, and the learned weights are then used to initialize training on the real EEG data. We investigate minor facial characteristics in brain signals and the ability of deep CNN models to learn them. The face-inversion effect was studied, and a considerable, sustained delay of the N170 component was observed. Consequently, the emotional and familiarity stimuli were each divided into two categories based on the posture of the face; the upright and inverted stimulus categories showed the lowest rates of confusion, again demonstrating the models' ability to learn the face-inversion effect.
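To make the pretraining strategy described in the abstract concrete, the sketch below illustrates the general idea: an EEGNet-style compact CNN is first trained on synthetic EEG epochs (e.g., produced by a GAN or VAE) and the learned weights are then used to initialize training on the real recordings. This is a minimal illustration, not the authors' implementation; the class name EEGNetLike, the 32-channel/256-sample epoch shape, the filter counts, learning rates, and the synthetic_loader/real_loader placeholders are all assumptions.

```python
# Minimal sketch (assumptions noted above) of pretraining on synthetic EEG
# and fine-tuning on real EEG with an EEGNet-style compact CNN in PyTorch.
import torch
import torch.nn as nn

class EEGNetLike(nn.Module):
    """Compact CNN in the spirit of EEGNet: temporal conv -> depthwise spatial
    conv -> separable conv -> linear classifier. Hyperparameters are illustrative."""
    def __init__(self, n_channels=32, n_samples=256, n_classes=2,
                 f1=8, d=2, f2=16, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution along the time axis
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
            # depthwise convolution across EEG electrodes (spatial filtering)
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
            # separable convolution: depthwise temporal + pointwise mixing
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8),
                      groups=f1 * d, bias=False),
            nn.Conv2d(f1 * d, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
            nn.Flatten(),
        )
        # infer the flattened feature size with a dummy forward pass
        with torch.no_grad():
            n_feats = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_feats, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x))

def train(model, loader, epochs=10, lr=1e-3):
    """Plain cross-entropy training loop over (epoch, label) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Pretrain on synthetic EEG, then fine-tune on real EEG.
# synthetic_loader / real_loader are hypothetical DataLoaders built from
# GAN/VAE-generated epochs and the recorded epochs, respectively.
model = EEGNetLike()
# train(model, synthetic_loader)                      # 1) pretrain on synthetic data
# torch.save(model.state_dict(), "pretrained.pt")     # 2) keep the learned weights
# model.load_state_dict(torch.load("pretrained.pt"))  # 3) initialize from them
# train(model, real_loader, lr=1e-4)                  # 4) fine-tune on real EEG
```

The point of the two-stage loop is simply that the weights learned on plentiful synthetic epochs serve as the starting point for the data-scarce real EEG task, which is the transfer step the abstract describes.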


Figure 1 (graphical abstract): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3148/8950142/d664e71a15f4/life-12-00374-g001.jpg
