Daoui Achraf, Yamni Mohamed, Altameem Torki, Ahmad Musheer, Hammad Mohamed, Pławiak Paweł, Tadeusiewicz Ryszard, Abd El-Latif Ahmed A
National School of Applied Sciences, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco.
Dhar El Mahrez Faculty of Science, Sidi Mohamed Ben Abdellah University, Fez 30000, Morocco.
Sensors (Basel). 2023 Nov 3;23(21):8957. doi: 10.3390/s23218957.
Color face images are often transmitted over public channels, where they are vulnerable to tampering attacks. To address this problem, the present paper introduces a novel scheme called Authentication and Color Face Self-Recovery (AuCFSR) for ensuring the authenticity of color face images and recovering the tampered areas in these images. AuCFSR uses a new two-dimensional hyperchaotic system called the two-dimensional modular sine-cosine map (2D MSCM) to embed authentication and recovery data into the least significant bits of color image pixels. This produces high-quality output images with a high security level. When a tampered color face image is detected, AuCFSR executes two deep learning models: the CodeFormer model to enhance the visual quality of the recovered color face image and the DeOldify model to improve the colorization of this image. Experimental results demonstrate that AuCFSR outperforms recent similar schemes in tamper detection accuracy, security level, and visual quality of the recovered images.
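To illustrate the general idea of chaos-driven least-significant-bit (LSB) embedding described above, the sketch below shows a minimal Python example. It is not the authors' AuCFSR implementation: the exact 2D MSCM equations are not given in this abstract, so `chaotic_sequence` uses a generic sine-cosine style placeholder map, and all function names, parameters, and the permutation-based pixel ordering are illustrative assumptions.

```python
import numpy as np

def chaotic_sequence(x0, y0, n):
    """Placeholder 2D sine-cosine style map (NOT the paper's 2D MSCM,
    whose exact equations are not stated in the abstract)."""
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x = np.mod(np.sin(np.pi * (x + 3.99 * y)) * 1e4, 1.0)
        y = np.mod(np.cos(np.pi * (y + 3.99 * x)) * 1e4, 1.0)
        xs[i], ys[i] = x, y
    return xs, ys

def embed_lsb(channel, bits, x0=0.3, y0=0.7):
    """Embed a bit string into the LSBs of one image channel,
    visiting pixels in a chaotically permuted order."""
    flat = channel.flatten().astype(np.uint8)
    xs, _ = chaotic_sequence(x0, y0, flat.size)
    order = np.argsort(xs)                    # chaotic permutation of pixel indices
    for bit, idx in zip(bits, order):
        flat[idx] = (flat[idx] & 0xFE) | bit  # overwrite only the LSB
    return flat.reshape(channel.shape)

# Usage: embed 1000 authentication/recovery bits into a 64x64 channel
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, size=1000)
marked = embed_lsb(img, bits)
# LSB-only embedding changes each pixel value by at most 1,
# which is why the watermarked image keeps high visual quality.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```

Because only the least significant bit of each selected pixel is modified, the distortion per pixel is at most one intensity level, consistent with the abstract's claim of high-quality output images; the chaotic ordering of the embedding positions is what ties the security of the scheme to the keyed map parameters.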