Kwon Gihyun, Ye Jong Chul
IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12179-12191. doi: 10.1109/TPAMI.2023.3283551. Epub 2023 Sep 5.
Many recent research efforts fine-tune a pre-trained generator with a few target images to generate images of a novel domain. Unfortunately, these methods often suffer from overfitting or underfitting when fine-tuned with a single target image. To address this, here we present a novel single-shot GAN adaptation method through unified CLIP space manipulations. Specifically, our model employs a two-step training strategy: reference image search in the source generator using CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP space consistency between the source and adapted generators. To further encourage the adapted model to produce samples that are spatially consistent with the source generator, we also propose a contrastive regularization on patchwise relationships in the CLIP space. Experimental results show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively. Furthermore, we show that our CLIP space manipulation strategy enables more effective attribute editing.
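Since the abstract only names the components of the two-step strategy, a minimal sketch may help make it concrete. The following is an illustrative approximation, not the authors' released code: the StyleGAN-like generator interface (source_G(w), source_G.w_dim), the use of OpenAI's clip package, the directional form of the CLIP-space consistency loss, and all hyperparameters are assumptions, and the patchwise contrastive regularization term is omitted for brevity.

```python
# Illustrative sketch of the two-step strategy described in the abstract.
# Assumptions (not from the paper's code): a StyleGAN-like generator G(w)
# producing RGB images in [-1, 1], OpenAI's `clip` package, and a
# directional CLIP-space consistency loss; hyperparameters are placeholders.
import copy

import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()  # keep everything in fp32
clip_model.requires_grad_(False)        # CLIP stays frozen throughout

# CLIP's standard input normalization constants.
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def clip_embed(img):
    """Map a [-1, 1] image batch to unit-norm CLIP image embeddings."""
    img = F.interpolate((img + 1) / 2, size=(224, 224), mode="bilinear", align_corners=False)
    img = (img - MEAN) / STD
    return F.normalize(clip_model.encode_image(img), dim=-1)

def search_reference(source_G, target_img, steps=500, lr=0.01):
    """Step 1: CLIP-guided latent optimization that finds a reference latent
    whose source-generator output is closest to the single target image."""
    w = torch.randn(1, source_G.w_dim, device=device, requires_grad=True)
    target_emb = clip_embed(target_img).detach()
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        loss = (1 - (clip_embed(source_G(w)) * target_emb).sum(dim=-1)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

def adapt_generator(source_G, w_ref, target_img, steps=1000, lr=0.002, batch=4):
    """Step 2: fine-tune a copy of the generator so that, for random latents,
    the CLIP-space shift from source to adapted samples stays consistent with
    the shift from the reference sample to the target image."""
    adapted_G = copy.deepcopy(source_G)
    opt = torch.optim.Adam(adapted_G.parameters(), lr=lr)
    with torch.no_grad():
        ref_dir = F.normalize(clip_embed(target_img) - clip_embed(source_G(w_ref)), dim=-1)
    for _ in range(steps):
        w = torch.randn(batch, source_G.w_dim, device=device)
        with torch.no_grad():
            src_emb = clip_embed(source_G(w))
        cur_dir = F.normalize(clip_embed(adapted_G(w)) - src_emb, dim=-1)
        loss = (1 - (cur_dir * ref_dir).sum(dim=-1)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted_G
```

In this reading, the reference latent found in step 1 anchors the fine-tuning of step 2, so the adapted generator inherits the source generator's diversity while taking on the target's texture; the exact loss formulation should be taken from the paper itself.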