Multimodal Attention Dynamic Fusion Network for Facial Micro-Expression Recognition.

Author Information

Yang Hongling, Xie Lun, Pan Hang, Li Chiqin, Wang Zhiliang, Zhong Jialiang

Affiliations

Department of Computer Science, Changzhi University, Changzhi 046011, China.

School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China.

Publication Information

Entropy (Basel). 2023 Aug 22;25(9):1246. doi: 10.3390/e25091246.

Abstract

The emotional changes in facial micro-expressions are combinations of action units. Researchers have shown that action units can serve as auxiliary data to improve facial micro-expression recognition. Most existing work attempts to fuse image features with action unit information, but it ignores the impact of action units on the facial image feature extraction process. Therefore, this paper proposes a local detail feature enhancement model based on a multimodal attention dynamic fusion network (MADFN) for micro-expression recognition. The method uses a masked autoencoder based on learnable class tokens to remove local areas with low emotional expressiveness from micro-expression images. An action unit dynamic fusion module then fuses action unit representations into the image features to improve their latent representation ability. The proposed model is evaluated and verified on the SMIC, CASME II, and SAMM datasets, as well as their combined 3DB-Combined dataset. The experimental results show that it achieves competitive accuracy rates of 81.71%, 82.11%, and 77.21% on SMIC, CASME II, and SAMM, respectively, demonstrating that the MADFN model helps improve the discrimination of emotional features in facial images.
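
A minimal, hypothetical sketch of the action unit dynamic fusion idea described in the abstract is given below. It is not the authors' released implementation; the module name AUDynamicFusion, the feature dimension, the number of action units, and the use of PyTorch cross-attention with a learnable gate are all illustrative assumptions about how AU representations might be injected into patch-level image features.

    # Hypothetical sketch (not the paper's code): fusing action-unit (AU) activations
    # into patch-level image features via cross-attention with a learnable gate,
    # loosely following the AU dynamic fusion module described in the abstract.
    import torch
    import torch.nn as nn

    class AUDynamicFusion(nn.Module):
        def __init__(self, dim=256, num_heads=4, num_aus=17):
            super().__init__()
            self.au_proj = nn.Linear(num_aus, dim)    # project the AU vector into the image feature space
            self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)
            self.gate = nn.Parameter(torch.zeros(1))  # learns how strongly AU information is injected

        def forward(self, img_tokens, au_vector):
            # img_tokens: (B, N, dim) patch features from the image backbone
            # au_vector:  (B, num_aus) action-unit activations (assumed precomputed)
            au_token = self.au_proj(au_vector).unsqueeze(1)             # (B, 1, dim)
            fused, _ = self.cross_attn(img_tokens, au_token, au_token)  # patches attend to the AU token
            return self.norm(img_tokens + torch.tanh(self.gate) * fused)

    # Toy usage with dummy tensors (shapes are illustrative, not from the paper):
    fusion = AUDynamicFusion()
    tokens = torch.randn(8, 49, 256)   # e.g., a 7x7 grid of 256-d patch features
    aus = torch.rand(8, 17)            # e.g., 17 AU activation scores per sample
    out = fusion(tokens, aus)          # -> (8, 49, 256)

Gating the attended output with the tanh of a zero-initialized parameter lets the network start from the pure image features and gradually learn how much weight to give the AU branch.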

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f9c0/10528512/de5ad6ac4d0d/entropy-25-01246-g001.jpg

Similar Articles

1
Multi-scale fusion visual attention network for facial micro-expression recognition.
Front Neurosci. 2023 Jul 27;17:1216181. doi: 10.3389/fnins.2023.1216181. eCollection 2023.
2
Micro-expression recognition based on multi-scale 3D residual convolutional neural network.
Math Biosci Eng. 2024 Mar 1;21(4):5007-5031. doi: 10.3934/mbe.2024221.
3
Two-Level Spatio-Temporal Feature Fused Two-Stream Network for Micro-Expression Recognition.
Sensors (Basel). 2024 Feb 29;24(5):1574. doi: 10.3390/s24051574.
4
Decoupling facial motion features and identity features for micro-expression recognition.
PeerJ Comput Sci. 2022 Nov 14;8:e1140. doi: 10.7717/peerj-cs.1140. eCollection 2022.
5
LEARNet: Dynamic Imaging Network for Micro Expression Recognition.
IEEE Trans Image Process. 2019 Sep 19. doi: 10.1109/TIP.2019.2912358.
6
Joint Local and Global Information Learning With Single Apex Frame Detection for Micro-Expression Recognition.
IEEE Trans Image Process. 2021;30:249-263. doi: 10.1109/TIP.2020.3035042. Epub 2020 Nov 18.
7
Dual-ATME: Dual-Branch Attention Network for Micro-Expression Recognition.
Entropy (Basel). 2023 Mar 6;25(3):460. doi: 10.3390/e25030460.
8
Lightweight ViT Model for Micro-Expression Recognition Enhanced by Transfer Learning.
Front Neurorobot. 2022 Jun 30;16:922761. doi: 10.3389/fnbot.2022.922761. eCollection 2022.
9
Facial micro-expression recognition based on motion magnification network and graph attention mechanism.
Heliyon. 2024 Aug 12;10(16):e35964. doi: 10.1016/j.heliyon.2024.e35964. eCollection 2024 Aug 30.

Cited By

1
A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face.
Entropy (Basel). 2023 Oct 12;25(10):1440. doi: 10.3390/e25101440.

References Cited in This Article

1
Joint Local and Global Information Learning With Single Apex Frame Detection for Micro-Expression Recognition.
IEEE Trans Image Process. 2021;30:249-263. doi: 10.1109/TIP.2020.3035042. Epub 2020 Nov 18.
2
Revealing the Invisible with Model and Data Shrinking for Composite-database Micro-expression Recognition.
IEEE Trans Image Process. 2020 Aug 26;PP. doi: 10.1109/TIP.2020.3018222.
3
Multimodal Language Processing in Human Communication.
Trends Cogn Sci. 2019 Aug;23(8):639-652. doi: 10.1016/j.tics.2019.05.006. Epub 2019 Jun 21.
4
Micro-Expression Recognition Using Color Spaces.
IEEE Trans Image Process. 2015 Dec;24(12):6034-47. doi: 10.1109/TIP.2015.2496314. Epub 2015 Oct 30.
5
CASME II: an improved spontaneous micro-expression database and the baseline evaluation.
PLoS One. 2014 Jan 27;9(1):e86041. doi: 10.1371/journal.pone.0086041. eCollection 2014.
6
Police lie detection accuracy: the effect of lie scenario.
Law Hum Behav. 2009 Dec;33(6):530-8. doi: 10.1007/s10979-008-9166-4. Epub 2009 Feb 26.
