Du Kangning, Wang Zhen, Cao Lin, Guo Yanan, Tian Shu, Zhang Fan
School of Information and Communication Engineering, Beijing Information Science and Technology University, Key Laboratory of Information and Communication Systems, Ministry of Information Industry, Beijing, China.
PeerJ Comput Sci. 2024 Jul 31;10:e2184. doi: 10.7717/peerj-cs.2184. eCollection 2024.
Transforming optical facial images into sketches while preserving realism and facial features poses a significant challenge. Current methods that rely on paired training data are costly and resource-intensive. Furthermore, they often fail to capture the intricate features of faces, resulting in substandard sketch generation. To address these challenges, we propose the novel hierarchical contrast generative adversarial network (HCGAN). Firstly, HCGAN consists of a global sketch synthesis module that generates sketches with well-defined global features and a local sketch refinement module that enhances feature extraction in critical facial regions. Secondly, we introduce a local refinement loss based on the local sketch refinement module, refining sketches at a granular level. Finally, we propose an association strategy called "warmup-epoch" and a local consistency loss between the two modules to ensure HCGAN is effectively optimized. Evaluations on the CUFS and SKSF-A datasets demonstrate that our method produces high-quality sketches and outperforms existing state-of-the-art methods in terms of fidelity and realism. Compared to the current state-of-the-art methods, HCGAN reduces FID by 12.6941, 4.9124, and 9.0316 on the three sub-datasets of CUFS, respectively, and by 7.4679 on the SKSF-A dataset. Additionally, it achieves the best scores for content fidelity (CF), global effects (GE), and local patterns (LP). The proposed HCGAN model provides a promising solution for realistic sketch synthesis trained on unpaired data.
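To make the "warmup-epoch" association strategy and the two-module loss design described above more concrete, the following is a minimal PyTorch sketch. It assumes a simple setup in which the global synthesis module is trained alone for a few warmup epochs, after which the local refinement module, a local refinement term, and a local consistency term are enabled. All module names, loss terms, and hyperparameters (GlobalSketchSynthesis, LocalSketchRefinement, lambda_refine, lambda_consist, the fixed central crop) are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch of a warmup-epoch training schedule for a two-module
# (global synthesis + local refinement) sketch generator. Placeholder code only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalSketchSynthesis(nn.Module):
    """Placeholder generator producing a full-face sketch from a photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())

    def forward(self, photo):
        return self.net(photo)


class LocalSketchRefinement(nn.Module):
    """Placeholder module refining a crop of the coarse sketch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())

    def forward(self, coarse_patch):
        return self.net(coarse_patch)


def train_epoch(epoch, warmup_epochs, global_net, local_net, optimizer, photos,
                lambda_refine=1.0, lambda_consist=0.5):
    """One hypothetical training epoch illustrating the warmup-epoch schedule."""
    for photo in photos:
        coarse = global_net(photo)
        # The global adversarial / contrastive objective would go here; a dummy
        # term keeps this sketch self-contained and runnable.
        loss = coarse.abs().mean()
        if epoch >= warmup_epochs:  # local branch is switched on only after warmup
            # In practice the crops would cover key regions (eyes, nose, mouth)
            # located from the face; a fixed central crop stands in here.
            h, w = coarse.shape[-2:]
            patch = coarse[:, :, h // 4: 3 * h // 4, w // 4: 3 * w // 4]
            refined = local_net(patch)
            # Hypothetical local refinement term on the refined patch.
            loss = loss + lambda_refine * refined.abs().mean()
            # Local consistency term ties the two modules' outputs together.
            loss = loss + lambda_consist * F.l1_loss(refined, patch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    global_net, local_net = GlobalSketchSynthesis(), LocalSketchRefinement()
    optimizer = torch.optim.Adam(
        list(global_net.parameters()) + list(local_net.parameters()), lr=2e-4)
    dummy_photos = [torch.randn(1, 3, 64, 64) for _ in range(4)]
    for epoch in range(3):
        train_epoch(epoch, warmup_epochs=1, global_net=global_net,
                    local_net=local_net, optimizer=optimizer, photos=dummy_photos)
```

The gating on `epoch >= warmup_epochs` is the essential idea: before the threshold only the global module receives gradients, and afterwards the local losses begin shaping both modules jointly.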