
Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN-[Formula: see text] model.

Author Information

Xiong Fan, Fan Mengzhao, Yang Xu, Wang Chenxiao, Zhou Jinli

Affiliations

Zhongyuan University of Technology, Zhengzhou, China.

Shengda Economics Trade and Management College of Zhengzhou, Zhengzhou, China.

Publication Information

PLoS One. 2025 May 27;20(5):e0322583. doi: 10.1371/journal.pone.0322583. eCollection 2025.

Abstract

Emotion recognition plays a significant role in artificial intelligence and human-computer interaction. Electroencephalography (EEG) signals, which directly reflect brain activity, have become an essential tool in emotion recognition research. However, the low dimensionality of sparse EEG channel data makes it difficult to extract effective features. This paper proposes a sparse-channel EEG emotion recognition method based on the CNN-KAN-[Formula: see text] network to address the challenges of limited feature extraction and cross-subject variability. Through a feature mapping strategy, the method maps Differential Entropy (DE), Power Spectral Density (PSD), and Emotion Valence Index-Asymmetry Index (EVI-ASI) features onto pseudo-RGB images, effectively integrating frequency-domain and spatial information from the sparse channels and providing multi-dimensional input for CNN feature extraction. By combining the KAN module with a fast Fourier transform-based [Formula: see text] attention mechanism, the model fuses frequency-domain and spatial features to classify complex emotional signals accurately. Experimental results show that the CNN-KAN-[Formula: see text] model performs comparably to multi-channel models while using only four EEG channels. By training on short-time segments, the model reduces the impact of individual differences and significantly improves generalization in cross-subject emotion recognition tasks. Extensive experiments on the SEED and DEAP datasets demonstrate the method's superior performance in emotion classification. In the merged-dataset experiments, accuracy reached 97.985% on the SEED three-class task and 91.718% on the DEAP four-class task. In the subject-dependent experiments, average accuracy was 97.45% on the SEED three-class task and 89.16% on the DEAP four-class task.
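As a rough illustration of the feature-mapping step described in the abstract, the sketch below computes per-band DE and PSD features for a four-channel segment and stacks them with an asymmetry index (ASI) into a three-plane pseudo-RGB array. The band definitions, the left/right channel pairing, the crude FFT-domain band-pass, and the omission of the EVI term (whose exact formula is given only in the paper) are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(x, fs, lo, hi):
    """Average periodogram power of signal x in [lo, hi) Hz (a simple PSD estimate)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

def bandpass(x, fs, lo, hi):
    """Crude FFT-domain band-pass: zero out bins outside [lo, hi) Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spec, n=x.size)

def differential_entropy(x):
    """For an approximately Gaussian signal, DE = 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * (np.var(x) + 1e-12))

def pseudo_rgb(eeg, fs, pairs=((0, 1), (2, 3))):
    """Map a sparse-channel EEG segment to a (3, n_channels, n_bands) pseudo-RGB array.

    eeg   : array of shape (n_channels, n_samples), e.g. 4 channels
    pairs : hypothetical left/right channel index pairs used for the ASI
    """
    n_ch, n_bd = eeg.shape[0], len(BANDS)
    de = np.empty((n_ch, n_bd))
    psd = np.empty((n_ch, n_bd))
    for c in range(n_ch):
        for b, (lo, hi) in enumerate(BANDS.values()):
            de[c, b] = differential_entropy(bandpass(eeg[c], fs, lo, hi))
            psd[c, b] = band_power(eeg[c], fs, lo, hi)
    # ASI: normalized left-right band-power difference, broadcast to both channels
    asi = np.zeros((n_ch, n_bd))
    for l, r in pairs:
        diff = (psd[l] - psd[r]) / (psd[l] + psd[r] + 1e-12)
        asi[l], asi[r] = diff, -diff
    return np.stack([de, psd, asi])  # three "color" planes for the CNN input

# Demo on synthetic data: 4 channels, 2 s at 128 Hz
img = pseudo_rgb(np.random.default_rng(0).standard_normal((4, 256)), fs=128.0)
print(img.shape)  # (3, 4, 4)
```

Each pseudo-RGB plane keeps the channel-by-band layout, so a small CNN can learn spatial (channel) and spectral (band) patterns jointly, which matches the multi-dimensional-input idea in the abstract.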


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/adff/12111688/587e009257b4/pone.0322583.g001.jpg
