National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China.
Sensors (Basel). 2023 May 30;23(11):5201. doi: 10.3390/s23115201.
Facial expression recognition (FER) has received increasing attention. However, multiple factors (e.g., uneven illumination, facial deflection, occlusion, and the subjectivity of annotations in image datasets) can degrade the performance of traditional FER methods. Thus, we propose a novel Hybrid Domain Consistency Network (HDCNet) based on a feature-constraint method that combines spatial-domain and channel-domain consistency. Specifically, first, the proposed HDCNet mines latent attention-consistency feature representations (unlike handcrafted features such as HOG and SIFT) as effective supervision information by comparing each original sample image with its augmented facial expression image. Second, HDCNet extracts facial expression-related features in the spatial and channel domains and then constrains their consistent expression through a mixed-domain consistency loss function. Notably, this loss function, based on attention-consistency constraints, requires no additional labels. Third, the network weights are learned by optimizing the classification network through the mixed-domain consistency loss. Finally, experiments on the public RAF-DB and AffectNet benchmark datasets verify that the proposed HDCNet improves classification accuracy by 0.3-3.84% over existing methods.
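The mixed-domain consistency idea can be sketched roughly as follows: compute a spatial attention map and a channel attention vector from the feature maps of the original and augmented views, then penalize their disagreement. This is a minimal NumPy illustration, not the paper's exact formulation; the attention operators (channel-mean / global-average-pool with softmax normalization), the MSE distance, and the weighting parameters are all assumptions made for clarity.

```python
import numpy as np

def spatial_attention(feat):
    # feat: (C, H, W) feature map; spatial attention here is the
    # channel-wise mean, softmax-normalized over all spatial positions.
    # (Assumed operator, for illustration only.)
    a = feat.mean(axis=0)                    # (H, W)
    e = np.exp(a - a.max())
    return e / e.sum()

def channel_attention(feat):
    # Channel attention here is the global average pool per channel,
    # softmax-normalized over channels. (Assumed operator.)
    a = feat.mean(axis=(1, 2))               # (C,)
    e = np.exp(a - a.max())
    return e / e.sum()

def hybrid_consistency_loss(feat_orig, feat_aug, w_spatial=1.0, w_channel=1.0):
    # Penalize disagreement between the attention maps of the original
    # and augmented views in both domains; no labels are needed.
    ls = np.mean((spatial_attention(feat_orig) - spatial_attention(feat_aug)) ** 2)
    lc = np.mean((channel_attention(feat_orig) - channel_attention(feat_aug)) ** 2)
    return w_spatial * ls + w_channel * lc
```

The loss is zero when the two views produce identical attention maps and grows as they diverge; in practice it would be added to the ordinary classification loss, and geometric augmentations (e.g., flips) would require aligning the spatial maps before comparison.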