DCAlexNet: Deep coupled AlexNet for micro facial expression recognition based on double face images.

Author Information

Zhang Yinjun

Affiliations

Guangxi Science and Technology Normal University, Laibin, China.

Publication Information

Comput Biol Med. 2025 May;189:109986. doi: 10.1016/j.compbiomed.2025.109986. Epub 2025 Mar 12.

Abstract

Facial Micro-Expression Recognition (FER) presents challenges due to individual variations in emotional intensity and the complexity of feature extraction. While apex frames offer valuable emotional information, their precise role in FER remains unclear. Low-resolution facial images further degrade performance compared to high-resolution (HR) images. Existing methods, including super-resolution and convolutional neural networks, yield only moderate results. This work proposes a deep coupled AlexNet (DCAlexNet) model with a trunk network trained on multi-resolution images to extract discriminative features and a branch network for resolution-specific mapping between HR and low-resolution (LR) images. By integrating global and local facial information, DCAlexNet enhances micro-expression recognition while filtering irrelevant facial regions. The evaluations on FER2013, BU-3DFE, and Oulu-CASIA datasets demonstrate superior performance, achieving 98.3 % accuracy on FER2013, 97.2 % on BU-3DFE, and 96 % on Oulu-CASIA, with improved RMSE, RAE, and processing times.

