


Exemplar-Based 3D Portrait Stylization.

Authors

Han Fangzhou, Ye Shuquan, He Mingming, Chai Menglei, Liao Jing

Publication

IEEE Trans Vis Comput Graph. 2023 Feb;29(2):1371-1383. doi: 10.1109/TVCG.2021.3114308. Epub 2022 Dec 29.

DOI: 10.1109/TVCG.2021.3114308
PMID: 34559656
Abstract

Exemplar-based portrait stylization is widely attractive and highly desired. Despite recent successes, it remains challenging, especially when considering both texture and geometric styles. In this article, we present the first framework for one-shot 3D portrait style transfer, which can generate 3D face models with both the geometry exaggerated and the texture stylized while preserving the identity from the original content. It requires only one arbitrary style image instead of a large set of training examples for a particular style, provides geometry and texture outputs that are fully parameterized and disentangled, and enables further graphics applications with the 3D representations. The framework consists of two stages. In the first geometric style transfer stage, we use facial landmark translation to capture the coarse geometry style and guide the deformation of the dense 3D face geometry. In the second texture style transfer stage, we focus on performing style transfer on the canonical texture by adopting a differentiable renderer to optimize the texture in a multi-view framework. Experiments show that our method achieves robustly good results on different artistic styles and outperforms existing methods. We also demonstrate the advantages of our method via various 2D and 3D graphics applications.
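The first stage described above propagates sparse landmark displacements to a dense 3D face mesh. A minimal sketch of that idea, using plain inverse-distance weighting rather than the paper's learned landmark translation network (the function and its parameters are illustrative assumptions, not the authors' implementation):

```python
def deform_vertices(vertices, landmarks, landmark_offsets, power=2.0, eps=1e-8):
    """Spread sparse landmark displacements over dense vertices.

    Each vertex receives a blend of the landmark offsets, weighted by
    inverse distance to each landmark, so nearby landmarks dominate.
    This is a toy stand-in for landmark-guided mesh deformation.
    """
    deformed = []
    for v in vertices:
        # Inverse-distance weight per landmark (eps avoids division by zero
        # when a vertex coincides with a landmark).
        weights = []
        total = 0.0
        for lm in landmarks:
            d2 = sum((a - b) ** 2 for a, b in zip(v, lm))
            w = 1.0 / (d2 ** (power / 2.0) + eps)
            weights.append(w)
            total += w
        # Normalized weighted sum of landmark offsets.
        offset = [0.0, 0.0, 0.0]
        for w, off in zip(weights, landmark_offsets):
            for i in range(3):
                offset[i] += (w / total) * off[i]
        deformed.append(tuple(v[i] + offset[i] for i in range(3)))
    return deformed
```

A vertex lying on a landmark inherits that landmark's offset almost exactly, while a vertex equidistant from two landmarks receives the average of their offsets; the paper's actual pipeline replaces this heuristic with landmark translation learned from the style exemplar driving a dense deformation.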


Similar Articles

1. Exemplar-Based 3D Portrait Stylization. IEEE Trans Vis Comput Graph. 2023 Feb;29(2):1371-1383. doi: 10.1109/TVCG.2021.3114308. Epub 2022 Dec 29.
2. MM-NeRF: Multimodal-Guided 3D Multi-Style Transfer of Neural Radiance Field. IEEE Trans Vis Comput Graph. 2025 Sep;31(9):5842-5853. doi: 10.1109/TVCG.2024.3476331.
3. UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene. IEEE Trans Vis Comput Graph. 2025 Apr;31(4):2045-2057. doi: 10.1109/TVCG.2024.3378692. Epub 2025 Feb 27.
4. Portrait stylized rendering for 3D light-field display based on radiation field and example guide. Opt Express. 2023 Aug 28;31(18):29664-29675. doi: 10.1364/OE.494870.
5. RPD-GAN: Learning to Draw Realistic Paintings with Generative Adversarial Network. IEEE Trans Image Process. 2020 Aug 28;PP. doi: 10.1109/TIP.2020.3018856.
6. StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads. IEEE Trans Pattern Anal Mach Intell. 2024 Jun;46(6):4331-4347. doi: 10.1109/TPAMI.2024.3357808. Epub 2024 May 7.
7. NeRF-Art: Text-Driven Neural Radiance Fields Stylization. IEEE Trans Vis Comput Graph. 2024 Aug;30(8):4983-4996. doi: 10.1109/TVCG.2023.3283400. Epub 2024 Jul 1.
8. Data-Driven Synthesis of Cartoon Faces Using Different Styles. IEEE Trans Image Process. 2017 Jan;26(1):464-478. doi: 10.1109/TIP.2016.2628581. Epub 2016 Nov 14.
9. CSAST: Content self-supervised and style contrastive learning for arbitrary style transfer. Neural Netw. 2023 Jul;164:146-155. doi: 10.1016/j.neunet.2023.04.037. Epub 2023 Apr 26.
10. MW-GAN: Multi-Warping GAN for Caricature Generation With Multi-Style Geometric Exaggeration. IEEE Trans Image Process. 2021;30:8644-8657. doi: 10.1109/TIP.2021.3118984. Epub 2021 Oct 20.