A Deeper Analysis of Volumetric Relightable Faces.

Authors

Rao Pramod, Mallikarjun B R, Fox Gereon, Weyrich Tim, Bickel Bernd, Pfister Hanspeter, Matusik Wojciech, Zhan Fangneng, Tewari Ayush, Theobalt Christian, Elgharib Mohamed

Affiliations

Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany.

Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany.

Publication

Int J Comput Vis. 2024;132(4):1148-1166. doi: 10.1007/s11263-023-01899-3. Epub 2023 Oct 31.

DOI: 10.1007/s11263-023-01899-3
PMID: 38549787
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10965625/
Abstract

Portrait viewpoint and illumination editing is an important problem with several applications in VR/AR, movies, and photography. Comprehensive knowledge of geometry and illumination is critical for obtaining photorealistic results. Current methods are unable to explicitly model in 3D while handling both viewpoint and illumination editing from a single image. In this paper, we propose VoRF, a novel approach that can take even a single portrait image as input and relight human heads under novel illuminations that can be viewed from arbitrary viewpoints. VoRF represents a human head as a continuous volumetric field and learns a prior model of human heads using a coordinate-based MLP with individual latent spaces for identity and illumination. The prior model is learned in an auto-decoder manner over a diverse class of head shapes and appearances, allowing VoRF to generalize to novel test identities from a single input image. Additionally, VoRF has a reflectance MLP that uses the intermediate features of the prior model for rendering One-Light-at-A-Time (OLAT) images under novel views. We synthesize novel illuminations by combining these OLAT images with target environment maps. Qualitative and quantitative evaluations demonstrate the effectiveness of VoRF for relighting and novel view synthesis, even when applied to unseen subjects under uncontrolled illumination. This work is an extension of Rao et al. (VoRF: Volumetric Relightable Faces, 2022). We provide extensive evaluation and ablative studies of our model and also provide an application where any face can be relit using textual input.
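The relighting step described above, combining OLAT images with a target environment map, follows from the linearity of light transport: an image under arbitrary illumination is a weighted sum of one-light-at-a-time images, with weights taken from the environment map's intensity at each light direction. The following is a minimal NumPy sketch of that combination step, not the authors' implementation; the array names and shapes are illustrative assumptions.

# A minimal sketch of OLAT-based relighting, not the paper's code.
# Assumes the OLAT stack has already been rendered; names are hypothetical.
import numpy as np

def relight_from_olat(olat_images: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """Synthesize a relit image as a weighted sum of OLAT images.

    olat_images: (L, H, W, 3) stack of renderings, one per light source.
    env_weights: (L, 3) RGB intensity of the target environment map,
                 sampled at the L light-source directions.
    """
    # Linearity of light transport: relit pixel = sum over lights of w_l * I_l.
    relit = np.einsum('lhwc,lc->hwc', olat_images, env_weights)
    return np.clip(relit, 0.0, None)

# Toy usage: 4 lights, a 2x2 image.
image = relight_from_olat(np.random.rand(4, 2, 2, 3), np.random.rand(4, 3))

Because the combination is linear, swapping in a different environment map only means resampling its weights at the same light directions and reusing the OLAT stack; no re-rendering is needed.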

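The abstract's claim that VoRF generalizes "to novel test identities from a single input image" rests on the auto-decoder design: there is no encoder, so at test time the decoder weights stay frozen and only the latent codes are optimized against the observation. Below is a hedged PyTorch-style sketch of such a fitting loop; `prior_mlp`, the latent dimensions, and the plain MSE loss are assumptions for illustration, not the paper's exact setup.

# A hedged sketch of auto-decoder test-time fitting; `prior_mlp`, latent
# sizes, and the loss are illustrative assumptions, not the paper's setup.
import torch

def fit_latents(prior_mlp, image, n_steps=200, lr=1e-2):
    """Optimize identity/illumination codes for one image, decoder frozen."""
    z_id = torch.zeros(1, 256, requires_grad=True)     # identity latent
    z_illum = torch.zeros(1, 64, requires_grad=True)   # illumination latent
    opt = torch.optim.Adam([z_id, z_illum], lr=lr)     # only latents update
    for _ in range(n_steps):
        opt.zero_grad()
        rendering = prior_mlp(z_id, z_illum)           # differentiable render
        torch.nn.functional.mse_loss(rendering, image).backward()
        opt.step()
    return z_id.detach(), z_illum.detach()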

[Figures 1-25 of the article are available in the PMC full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10965625/]

Similar Articles

1. A Deeper Analysis of Volumetric Relightable Faces.
   Int J Comput Vis. 2024;132(4):1148-1166. doi: 10.1007/s11263-023-01899-3. Epub 2023 Oct 31.
2. Designing an Illumination-Aware Network for Deep Image Relighting.
   IEEE Trans Image Process. 2022;31:5396-5411. doi: 10.1109/TIP.2022.3195366. Epub 2022 Aug 17.
3. LEIFR-Net: light estimation for implicit face relight network.
   Opt Express. 2024 Feb 12;32(4):4827-4838. doi: 10.1364/OE.510060.
4. UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene.
   IEEE Trans Vis Comput Graph. 2025 Apr;31(4):2045-2057. doi: 10.1109/TVCG.2024.3378692. Epub 2025 Feb 27.
5. Relightable Detailed Human Reconstruction From Sparse Flashlight Images.
   IEEE Trans Vis Comput Graph. 2025 Sep;31(9):5519-5531. doi: 10.1109/TVCG.2024.3450591.
6. HVTR++: Image and Pose Driven Human Avatars Using Hybrid Volumetric-Textural Rendering.
   IEEE Trans Vis Comput Graph. 2024 Aug;30(8):5478-5492. doi: 10.1109/TVCG.2023.3297721. Epub 2024 Jul 1.
7. Relighting photographs of tree canopies.
   IEEE Trans Vis Comput Graph. 2011 Oct;17(10):1459-74. doi: 10.1109/TVCG.2010.236.
8. Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics.
   IEEE Trans Pattern Anal Mach Intell. 2006 Mar;28(3):351-63. doi: 10.1109/TPAMI.2006.53.
9. Symmetrical Viewpoint Representations in Face-Selective Regions Convey an Advantage in the Perception and Recognition of Faces.
   J Neurosci. 2019 May 8;39(19):3741-3751. doi: 10.1523/JNEUROSCI.1977-18.2019. Epub 2019 Mar 6.
10. Neural Radiance Fields From Sparse RGB-D Images for High-Quality View Synthesis.
    IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8713-8728. doi: 10.1109/TPAMI.2022.3232502. Epub 2023 Jun 5.
