One-Shot Adaptation of GAN in Just One CLIP.

Authors

Kwon Gihyun, Ye Jong Chul

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12179-12191. doi: 10.1109/TPAMI.2023.3283551. Epub 2023 Sep 5.

DOI: 10.1109/TPAMI.2023.3283551
PMID: 37352089
Abstract

There are many recent research efforts to fine-tune a pre-trained generator with a few target images to generate images of a novel domain. Unfortunately, these methods often suffer from overfitting or under-fitting when fine-tuned with a single target image. To address this, here we present a novel single-shot GAN adaptation method through unified CLIP space manipulations. Specifically, our model employs a two-step training strategy: reference image search in the source generator using a CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP space consistency between the source and adapted generators. To further improve the adapted model to produce spatially consistent samples with respect to the source generator, we also propose contrastive regularization for patchwise relationships in the CLIP space. Experimental results show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively. Furthermore, we show that our CLIP space manipulation strategy allows more effective attribute editing.
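The losses described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: CLIP embeddings are represented as plain numpy vectors, and the function names `clip_consistency_loss` and `patch_relation_loss` are hypothetical. The first function captures the idea of directional consistency in CLIP space between the source and adapted generators; the second approximates a regularizer that preserves patchwise relationships so the adapted samples stay spatially consistent with the source.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def clip_consistency_loss(e_src, e_adapt, e_ref, e_target):
    """Directional CLIP-space consistency (sketch): the adapted sample's
    embedding should move away from its source counterpart along the same
    direction that the target image lies from the reference image found in
    the first (latent-search) step. Loss is zero when directions align."""
    return 1.0 - cosine(e_adapt - e_src, e_target - e_ref)

def patch_relation_loss(src_patches, adapt_patches):
    """Patchwise regularizer (sketch): penalize changes in the pairwise
    cosine-similarity structure of CLIP patch embeddings between source and
    adapted samples, encouraging spatially consistent outputs."""
    def sim_matrix(p):
        p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)
        return p @ p.T
    diff = sim_matrix(src_patches) - sim_matrix(adapt_patches)
    return float(np.mean(diff ** 2))
```

In a full training loop, the embeddings would come from a frozen CLIP image encoder applied to generator outputs, and the two terms would be summed with weighting coefficients before backpropagating into the adapted generator only.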


Similar Articles

1
One-Shot Adaptation of GAN in Just One CLIP.
IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12179-12191. doi: 10.1109/TPAMI.2023.3283551. Epub 2023 Sep 5.
2
Proto-Adapter: Efficient Training-Free CLIP-Adapter for Few-Shot Image Classification.
Sensors (Basel). 2024 Jun 4;24(11):3624. doi: 10.3390/s24113624.
3
High-Quality and Diverse Few-Shot Image Generation via Masked Discrimination.
IEEE Trans Image Process. 2024;33:2950-2965. doi: 10.1109/TIP.2024.3385295. Epub 2024 Apr 22.
4
FEditNet++: Few-Shot Editing of Latent Semantics in GAN Spaces With Correlated Attribute Disentanglement.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9975-9990. doi: 10.1109/TPAMI.2024.3432529. Epub 2024 Nov 6.
5
Enhancing Few-Shot CLIP With Semantic-Aware Fine-Tuning.
IEEE Trans Neural Netw Learn Syst. 2024 Aug 26;PP. doi: 10.1109/TNNLS.2024.3443394.
6
CLIP knows image aesthetics.
Front Artif Intell. 2022 Nov 25;5:976235. doi: 10.3389/frai.2022.976235. eCollection 2022.
7
Multilevel structure-preserved GAN for domain adaptation in intravascular ultrasound analysis.
Med Image Anal. 2022 Nov;82:102614. doi: 10.1016/j.media.2022.102614. Epub 2022 Sep 6.
8
Building an Open-Vocabulary Video CLIP Model With Better Architectures, Optimization and Data.
IEEE Trans Pattern Anal Mach Intell. 2024 Jul;46(7):4747-4762. doi: 10.1109/TPAMI.2024.3357503. Epub 2024 Jun 5.
9
Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation.
Neural Netw. 2023 Apr;161:682-692. doi: 10.1016/j.neunet.2023.02.009. Epub 2023 Feb 10.
10
Low-Data Drug Design with Few-Shot Generative Domain Adaptation.
Bioengineering (Basel). 2023 Sep 21;10(9):1104. doi: 10.3390/bioengineering10091104.