

PNRNet: Physically-Inspired Neural Rendering for Any-to-Any Relighting.

Authors

Hu Zhongyun, Nsampi Ntumba Elie, Wang Xue, Wang Qing

Publication

IEEE Trans Image Process. 2022;31:3935-3948. doi: 10.1109/TIP.2022.3177311. Epub 2022 Jun 9.

DOI: 10.1109/TIP.2022.3177311
PMID: 35635816
Abstract

Existing any-to-any relighting methods suffer from the task-aliasing effects and the loss of local details in the image generation process, such as shading and attached-shadow. In this paper, we present PNRNet, a novel neural architecture that decomposes the any-to-any relighting task into three simpler sub-tasks, i.e. lighting estimation, color temperature transfer, and lighting direction transfer, to avoid the task-aliasing effects. These sub-tasks are easy to learn and can be trained with direct supervisions independently. To better preserve local shading and attached-shadow details, we propose a parallel multi-scale network that incorporates multiple physical attributes to model local illuminations for lighting direction transfer. We also introduce a simple yet effective color temperature transfer network to learn a pixel-level non-linear function which allows color temperature adjustment beyond the predefined color temperatures and generalizes well to real images. Extensive experiments demonstrate that our proposed approach achieves better results quantitatively and qualitatively than prior works.
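The abstract's key idea is decomposing any-to-any relighting into three independently supervised stages: lighting estimation, color temperature transfer, and lighting direction transfer. The following is a minimal toy sketch of that pipeline structure only; the function names, data representation, and the stand-in arithmetic are illustrative assumptions, not the paper's actual networks.

```python
# Toy sketch of the three-stage decomposition described in the abstract.
# An "image" here is just a list of (r, g, b) tuples in [0, 1]; each stage
# is a crude hand-written stand-in for the corresponding learned sub-network.

def estimate_lighting(image):
    """Stage 1 (lighting estimation): infer the source image's lighting.
    Stand-in: the per-channel mean intensity as a crude lighting estimate."""
    n = len(image)
    return tuple(sum(p[c] for p in image) / n for c in range(3))

def transfer_color_temperature(image, src_light, dst_light):
    """Stage 2 (color temperature transfer): a pixel-level non-linear map.
    Stand-in: per-channel gain toward the target lighting, clipped to [0, 1]."""
    gains = tuple(d / s if s else 1.0 for s, d in zip(src_light, dst_light))
    return [tuple(min(1.0, c * g) for c, g in zip(p, gains)) for p in image]

def transfer_lighting_direction(image, direction):
    """Stage 3 (lighting direction transfer): re-shade for the new direction.
    Stand-in: treat 'direction' as a scalar attenuation of brightness."""
    k = max(0.0, direction)
    return [tuple(min(1.0, c * k) for c in p) for p in image]

def relight(image, guide_light, direction):
    """Full pipeline: estimate source lighting, then apply the two transfers."""
    src_light = estimate_lighting(image)
    image = transfer_color_temperature(image, src_light, guide_light)
    return transfer_lighting_direction(image, direction)
```

Because each stage has its own input/output contract, each stand-in could be trained with direct supervision in isolation, which is the point of the decomposition: it avoids the task-aliasing the abstract attributes to monolithic relighting networks.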


Similar articles

1
PNRNet: Physically-Inspired Neural Rendering for Any-to-Any Relighting.
IEEE Trans Image Process. 2022;31:3935-3948. doi: 10.1109/TIP.2022.3177311. Epub 2022 Jun 9.
2
Designing an Illumination-Aware Network for Deep Image Relighting.
IEEE Trans Image Process. 2022;31:5396-5411. doi: 10.1109/TIP.2022.3195366. Epub 2022 Aug 17.
3
DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems.
IEEE Trans Vis Comput Graph. 2021 May;27(5):2725-2735. doi: 10.1109/TVCG.2021.3067771. Epub 2021 Apr 15.
4
GMLight: Lighting Estimation via Geometric Distribution Approximation.
IEEE Trans Image Process. 2022;31:2268-2278. doi: 10.1109/TIP.2022.3151997. Epub 2022 Mar 11.
5
A New Intrinsic-Lighting Color Space for Daytime Outdoor Images.
IEEE Trans Image Process. 2017 Feb;26(2):1031-1039. doi: 10.1109/TIP.2016.2642788. Epub 2016 Dec 21.
6
MILO: Multi-Bounce Inverse Rendering for Indoor Scene With Light-Emitting Objects.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):10129-10142. doi: 10.1109/TPAMI.2023.3244658. Epub 2023 Jun 30.
7
Inverse Rendering and Relighting From Multiple Color Plus Depth Images.
IEEE Trans Image Process. 2017 Oct;26(10):4951-4961. doi: 10.1109/TIP.2017.2728184. Epub 2017 Jul 18.
8
Single image relighting based on illumination field reconstruction.
Opt Express. 2023 Aug 28;31(18):29676-29694. doi: 10.1364/OE.495858.
9
PBR-Net: Imitating Physically Based Rendering using Deep Neural Network.
IEEE Trans Image Process. 2020 Apr 16. doi: 10.1109/TIP.2020.2987169.
10
Separating Shading and Reflectance From Cartoon Illustrations.
IEEE Trans Vis Comput Graph. 2024 Jul;30(7):3664-3679. doi: 10.1109/TVCG.2023.3239364. Epub 2024 Jun 27.