Su Wanchao, Wang Can, Liu Chen, Han Fangzhou, Fu Hongbo, Liao Jing
IEEE Trans Vis Comput Graph. 2024 Jul 24;PP. doi: 10.1109/TVCG.2024.3432910.
Creating finely retouched portrait images is tedious and time-consuming even for professional artists. Automatic retouching methods exist, but they either suffer from over-smoothing artifacts or lack generalization ability. To address these issues, we present StyleRetoucher, a novel automatic portrait image retouching framework that leverages StyleGAN's generation and generalization ability to improve an input portrait image's skin condition while preserving its facial details. Harnessing the priors of pretrained StyleGAN, our method shows superior robustness: (a) performing stably with fewer training samples, and (b) generalizing well to out-of-domain data. Moreover, by blending the spatial features of the input image with intermediate features of the StyleGAN layers, our method preserves the input characteristics to the largest extent. We further propose a novel blemish-aware feature selection mechanism to effectively identify and remove skin blemishes, improving the image's skin condition. Qualitative and quantitative evaluations validate the strong generalization capability of our method. Further experiments show StyleRetoucher's superior performance over alternative solutions in the image retouching task. We also conduct a perceptual user study confirming that our method outperforms existing state-of-the-art alternatives in retouching quality.
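The abstract's core idea of blending input spatial features with StyleGAN intermediate features under a blemish-aware mask can be illustrated with a minimal sketch. This is not the paper's actual architecture: the function name, array shapes, and the hand-placed mask below are all illustrative assumptions, standing in for features extracted by the paper's encoder and the learned blemish-aware selection mechanism.

```python
import numpy as np

def blend_features(input_feat, gan_feat, blemish_mask):
    """Per-pixel feature blending (illustrative, not the paper's exact method).

    Keeps the input's own features where the skin is clean (mask = 0) and
    substitutes StyleGAN-generated features where blemishes are flagged
    (mask = 1), so facial details outside blemish regions are preserved.

    input_feat, gan_feat: (C, H, W) feature maps
    blemish_mask:         (H, W) soft mask in [0, 1]
    """
    m = blemish_mask[None, :, :]  # broadcast the mask over the channel axis
    return (1.0 - m) * input_feat + m * gan_feat

# Toy example with 4-channel 8x8 feature maps and a hypothetical blemish patch.
rng = np.random.default_rng(0)
inp = rng.standard_normal((4, 8, 8))
gan = rng.standard_normal((4, 8, 8))
mask = np.zeros((8, 8))
mask[2:4, 2:4] = 1.0  # pretend a blemish was detected in this region
out = blend_features(inp, gan, mask)
```

In the blended output, pixels outside the masked patch are identical to the input features, while pixels inside it come from the generated features; a soft (fractional) mask would interpolate between the two, which is how a learned selection mechanism could avoid hard seams.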