Zhang Dehao, Zhang Tao, Sun Haijiang, Tang Yanhui, Liu Qiaoyuan
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
University of Chinese Academy of Sciences, Beijing 100049, China.
Sensors (Basel). 2024 Nov 26;24(23):7549. doi: 10.3390/s24237549.
Among facial expressions, micro-expressions are more genuine than macro-expressions and convey more valuable information, making them widely applicable in psychological counseling and clinical diagnosis. In recent years, deep learning methods based on optical flow and the Transformer have achieved excellent results in this field, but most current algorithms focus on building serialized tokens through self-attention and do not account for the spatial relationships between facial landmarks. To address the locality and subtle changes of micro-expressions themselves, we propose MCCA-VNET, a deep learning model built on the Transformer. We extract the motion-change features as the model input and fuse channel attention and spatial attention into the Vision Transformer to capture correlations between features across different dimensions, which improves micro-expression recognition accuracy. To verify the effectiveness of the proposed algorithm, we conduct experiments on the SAMM, CASME II, and SMIC datasets and compare the results with previous state-of-the-art algorithms. Our algorithm raises the UF1 and UAR scores on the composite dataset to 0.8676 and 0.8622, respectively, outperforming other algorithms on multiple metrics and achieving the best overall performance.
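To illustrate the idea of fusing channel attention and spatial attention with a Vision Transformer operating on optical-flow inputs, the following is a minimal sketch, not the authors' released MCCA-VNET code. The module names, feature-map size, patch size, attention design (a CBAM-style channel/spatial block applied before patch embedding), and class count are all assumptions made for illustration.

```python
# Hypothetical sketch: channel + spatial attention on optical-flow maps,
# followed by a small Vision Transformer classifier. All sizes are assumed.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweight feature channels using globally pooled context."""
    def __init__(self, channels: int, reduction: int = 1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, max(channels // reduction, 1)),
            nn.ReLU(inplace=True),
            nn.Linear(max(channels // reduction, 1), channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w


class SpatialAttention(nn.Module):
    """Highlight informative spatial locations (e.g., around facial landmarks)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class AttentionViT(nn.Module):
    """Optical-flow maps -> channel/spatial attention -> patch tokens -> Transformer."""
    def __init__(self, in_channels=3, img_size=28, patch=7, dim=128,
                 depth=4, heads=4, num_classes=3):
        super().__init__()
        self.channel_att = ChannelAttention(in_channels)
        self.spatial_att = SpatialAttention()
        self.patch_embed = nn.Conv2d(in_channels, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, flow: torch.Tensor) -> torch.Tensor:  # flow: (B, C, H, W)
        x = self.spatial_att(self.channel_att(flow))          # fused attention
        x = self.patch_embed(x).flatten(2).transpose(1, 2)    # (B, N, dim) tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                             # classify from CLS token


if __name__ == "__main__":
    # Example: a batch of 2 optical-flow images (u, v, magnitude) of size 28x28.
    logits = AttentionViT()(torch.randn(2, 3, 28, 28))
    print(logits.shape)  # torch.Size([2, 3])
```

In this sketch the attention block acts on the raw flow maps before tokenization; the paper's actual fusion point and attention formulation may differ.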