

Few-Shot Face Stylization via GAN Prior Distillation.

Author information

Zhao Ruoyu, Zhu Mingrui, Wang Nannan, Gao Xinbo

Publication information

IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4492-4503. doi: 10.1109/TNNLS.2024.3377609. Epub 2025 Feb 28.

DOI: 10.1109/TNNLS.2024.3377609
PMID: 38536698
Abstract

Face stylization has made notable progress in recent years. However, when trained on limited data, existing approaches suffer a significant drop in performance. Although some studies have attempted to tackle this problem, they either fail to reach a true few-shot setting (fewer than 10 samples) or achieve only suboptimal results. In this article, we propose GAN Prior Distillation (GPD) to enable effective few-shot face stylization. GPD contains two models: a teacher network with a GAN prior and a student network that performs end-to-end translation. Specifically, we adapt the teacher network, which is trained on large-scale source-domain data, to the target domain using a handful of samples, allowing it to learn the target domain's knowledge. We can then achieve few-shot augmentation by generating source-domain and target-domain images simultaneously from the same latent codes. We further propose an anchor-based knowledge distillation module that fully exploits the difference between the training data and the augmented data to distill the teacher network's knowledge into the student network. By absorbing this additional knowledge, the trained student network achieves excellent generalization. Qualitative and quantitative experiments demonstrate that our method outperforms state-of-the-art approaches in the few-shot setting.
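The abstract outlines a two-stage pipeline. The first stage adapts a GAN-prior teacher to the target style from a handful of exemplars. The sketch below shows one plausible reading of that step as a standard non-saturating GAN fine-tuning loop; the `g_source` and `discriminator` interfaces, the loss, and all hyperparameters are assumptions for illustration, since the abstract does not specify the adaptation procedure.

```python
# Sketch of stage 1: adapting the GAN-prior teacher to the target domain.
# `g_source` and `discriminator` are assumed pretrained modules; the
# non-saturating logistic loss and hyperparameters are illustrative
# guesses, not the authors' released implementation.
import copy
import torch
import torch.nn.functional as F

def adapt_teacher(g_source, discriminator, target_samples, steps=2_000, z_dim=512):
    """Fine-tune a copy of the source generator on <10 target-style images."""
    device = target_samples.device
    g_target = copy.deepcopy(g_source)  # g_source stays frozen as the source prior
    opt_g = torch.optim.Adam(g_target.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    for _ in range(steps):
        z = torch.randn(len(target_samples), z_dim, device=device)
        fake = g_target(z)
        # Discriminator step: few real target exemplars vs. generated images.
        d_loss = (F.softplus(discriminator(fake.detach())).mean()
                  + F.softplus(-discriminator(target_samples)).mean())
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: push g_target's samples toward the target style.
        g_loss = F.softplus(-discriminator(g_target(z))).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return g_target
```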

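The second stage is the mechanism the abstract does describe concretely: feed the same latent code to the frozen source generator and the adapted target generator to obtain aligned (source, stylized) pairs, then distill that mapping into an end-to-end student. In the sketch below, the paper's anchor-based knowledge distillation module, whose details are not given in the abstract, is simplified to a plain pixel-wise distillation loss, and `StudentTranslator` is a hypothetical stand-in architecture.

```python
# Sketch of stage 2: few-shot augmentation with shared latent codes,
# followed by distillation into an end-to-end student translator.
# `StudentTranslator` is hypothetical; the paper's anchor-based
# distillation module is reduced here to an L1 loss on teacher pairs.
import torch
import torch.nn as nn

class StudentTranslator(nn.Module):
    """Tiny conv net mapping a source-domain face to the target style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def distill_student(g_source, g_target, student, steps=10_000,
                    batch=8, z_dim=512, device="cpu"):
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    l1 = nn.L1Loss()
    for _ in range(steps):
        # Few-shot augmentation: the SAME latent code drives both generators,
        # yielding an aligned (source face, teacher-stylized face) pair.
        z = torch.randn(batch, z_dim, device=device)
        with torch.no_grad():
            x_src = g_source(z)   # source-domain face
            x_tgt = g_target(z)   # teacher's stylized version
        # Distillation: the student must reproduce the teacher's stylization
        # from the image alone, with no access to the latent code.
        loss = l1(student(x_src), x_tgt)
        opt.zero_grad(); loss.backward(); opt.step()
    return student
```

After training, the two generators can be discarded: only the lightweight student runs at inference, which appears to be what the abstract means by a student network that "fulfills end-to-end translation".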

Similar articles

1. Few-Shot Face Stylization via GAN Prior Distillation.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4492-4503. doi: 10.1109/TNNLS.2024.3377609. Epub 2025 Feb 28.

2. Towards efficient network compression via Few-Shot Slimming.
Neural Netw. 2022 Mar;147:113-125. doi: 10.1016/j.neunet.2021.12.011. Epub 2021 Dec 24.

3. Dual Distillation Discriminator Networks for Domain Adaptive Few-Shot Learning.
Neural Netw. 2023 Aug;165:625-633. doi: 10.1016/j.neunet.2023.06.009. Epub 2023 Jun 15.

4. Generalized Knowledge Distillation via Relationship Matching.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):1817-1834. doi: 10.1109/TPAMI.2022.3160328. Epub 2023 Jan 6.

5. Research on a Cross-Domain Few-Shot Adaptive Classification Algorithm Based on Knowledge Distillation Technology.
Sensors (Basel). 2024 Mar 18;24(6):1939. doi: 10.3390/s24061939.

6. Spectral Decomposition and Transformation for Cross-domain Few-shot Learning.
Neural Netw. 2024 Nov;179:106536. doi: 10.1016/j.neunet.2024.106536. Epub 2024 Jul 14.

7. Few-Shot Graph Anomaly Detection via Dual-Level Knowledge Distillation.
Entropy (Basel). 2025 Jan 1;27(1):28. doi: 10.3390/e27010028.

8. Knowledge Distillation for Face Photo-Sketch Synthesis.
IEEE Trans Neural Netw Learn Syst. 2022 Feb;33(2):893-906. doi: 10.1109/TNNLS.2020.3030536. Epub 2022 Feb 3.

9. Adversarial learning-based multi-level dense-transmission knowledge distillation for AP-ROP detection.
Med Image Anal. 2023 Feb;84:102725. doi: 10.1016/j.media.2022.102725. Epub 2022 Dec 9.

10. Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning.
Neural Netw. 2021 Jun;138:140-149. doi: 10.1016/j.neunet.2021.02.007. Epub 2021 Feb 17.