Siddique Muhammad Tariq, Venkat Ibrahim, Farooq Humera, Tajuddin Sharul, Newaz S H Shah
Department of Computer Sciences, Bahria University, Karachi, Pakistan.
School of Computing and Informatics, Universiti Teknologi Brunei, Jalan Tungku Link Gadong, Brunei-Muara, Brunei Darussalam.
PLoS One. 2025 May 22;20(5):e0322638. doi: 10.1371/journal.pone.0322638. eCollection 2025.
Face recognition with a Single Sample Per Person (SSPP) is challenging in itself, and it becomes even more difficult when recognition from a single sample is performed in an unconstrained environment. Unconstrained environments are typically characterized by variations in facial expression, pose, occlusion, and illumination, and the difficulty is compounded when only a single sample is available and occlusion is present. Extensive research has addressed face recognition under pose and expression changes, whereas comparatively little work has been reported on occlusion in facial images. Occlusion can alter the appearance of a face image and degrade recognition performance, so a robust method for handling occlusion is required. This study aimed to implement an effective augmentation technique that improves the performance of SSPP face recognition systems in unconstrained environments. Virtual samples were created to expand the sample size and thereby address the single-sample problem, and a local region-based technique was proposed to handle occlusion through the creation of these virtual samples. A deep neural network model, FaceNet, was used to extract features, and a support vector machine was used for classification. The proposed approach was evaluated and shown to handle occlusion better than its state-of-the-art counterparts, achieving accuracies of 94.83% for occlusion by sunglasses and 98% for occlusion by scarves on the AR dataset.
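The abstract describes a recognition pipeline in which FaceNet embeddings of the single gallery image (plus its virtual, augmented samples) are classified with a support vector machine. The paper does not specify an implementation; the following is a minimal sketch assuming the facenet-pytorch and scikit-learn libraries, with all function names and parameter values chosen for illustration only.

```python
# Hypothetical sketch of the described pipeline: FaceNet embeddings fed to an SVM.
# Library choices (facenet-pytorch, scikit-learn) and all parameters are assumptions,
# not the authors' implementation.
import numpy as np
import torch
from facenet_pytorch import InceptionResnetV1
from sklearn.svm import SVC

# Pretrained FaceNet backbone (Inception-ResNet v1 trained on VGGFace2).
embedder = InceptionResnetV1(pretrained='vggface2').eval()

def embed(faces: np.ndarray) -> np.ndarray:
    """Map aligned RGB face crops (N, 160, 160, 3) in [0, 255] to 512-d embeddings."""
    x = torch.from_numpy(faces).permute(0, 3, 1, 2).float()
    x = (x - 127.5) / 128.0  # standard FaceNet input standardization
    with torch.no_grad():
        return embedder(x).numpy()

def train_sspp_classifier(gallery_faces: np.ndarray, labels: np.ndarray) -> SVC:
    """Fit an SVM on embeddings of each person's single gallery image together
    with the virtual (augmented) samples derived from it."""
    clf = SVC(kernel='linear', probability=True)
    clf.fit(embed(gallery_faces), labels)
    return clf

def identify(clf: SVC, probe_faces: np.ndarray) -> np.ndarray:
    """Predict identities for probe images (e.g., occluded by sunglasses or a scarf)."""
    return clf.predict(embed(probe_faces))
```

In this sketch the local region-based augmentation step is assumed to have already produced the extra gallery images; only the feature-extraction and classification stages named in the abstract are shown.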