

Reconstructing Reflection Maps Using a Stacked-CNN for Mixed Reality Rendering.

Author Information

Chalmers Andrew, Zhao Junhong, Medeiros Daniel, Rhee Taehyun

Publication Information

IEEE Trans Vis Comput Graph. 2021 Oct;27(10):4073-4084. doi: 10.1109/TVCG.2020.3001917. Epub 2021 Sep 1.

DOI: 10.1109/TVCG.2020.3001917
PMID: 32746261
Abstract

Corresponding lighting and reflectance between real and virtual objects is important for spatial presence in augmented and mixed reality (AR and MR) applications. We present a method to reconstruct real-world environmental lighting, encoded as a reflection map (RM), from a conventional photograph. To achieve this, we propose a stacked convolutional neural network (SCNN) that predicts high dynamic range (HDR) 360 RMs with varying roughness from a limited field of view, low dynamic range photograph. The SCNN is progressively trained from high to low roughness to predict RMs at varying roughness levels, where each roughness level corresponds to a virtual object's roughness (from diffuse to glossy) for rendering. The predicted RM provides high-fidelity rendering of virtual objects to match with the background photograph. We illustrate the use of our method with indoor and outdoor scenes trained on separate indoor/outdoor SCNNs showing plausible rendering and composition of virtual objects in AR/MR. We show that our method has improved quality over previous methods with a comparative user study and error metrics.
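The abstract describes a stacked network run progressively from high to low roughness, where each stage conditions on the input photograph and on the previous stage's rougher reflection map. A minimal structural sketch of that high-to-low stacking idea, using toy NumPy "stages" in place of the paper's trained CNNs (all function names and the mixing weights here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical per-roughness "stage": a toy refinement that mixes the
# photograph features with the previous, rougher prediction. In the
# paper each stage is a trained CNN; this stand-in only shows the
# data flow of the stack.
def make_stage(weight):
    def stage(photo_feat, prev_rm):
        return weight * photo_feat + (1.0 - weight) * prev_rm
    return stage

def stacked_predict(photo_feat, roughness_levels, stages):
    """Run the stack from high (diffuse) to low (glossy) roughness:
    each stage sees the photograph features plus the previous stage's
    rougher reflection-map estimate, and the result at each level is
    kept for rendering objects of that roughness."""
    prev = np.zeros_like(photo_feat)   # no prediction before stage 1
    rms = {}
    for rough, stage in zip(roughness_levels, stages):
        prev = stage(photo_feat, prev)  # refine the rougher estimate
        rms[rough] = prev
    return rms

# Toy 16x32 feature grid standing in for a 360-degree RM derived
# from a limited-FOV photograph.
photo_feat = np.random.rand(16, 32)
levels = [1.0, 0.5, 0.1]                        # high -> low roughness
stages = [make_stage(w) for w in (0.3, 0.6, 0.9)]
rms = stacked_predict(photo_feat, levels, stages)
```

Each entry of `rms` corresponds to one roughness level of the virtual object, matching the paper's progressive diffuse-to-glossy training order.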


Similar Articles

1. Reconstructing Reflection Maps Using a Stacked-CNN for Mixed Reality Rendering.
IEEE Trans Vis Comput Graph. 2021 Oct;27(10):4073-4084. doi: 10.1109/TVCG.2020.3001917. Epub 2021 Sep 1.
2. Adaptive Light Estimation using Dynamic Filtering for Diverse Lighting Conditions.
IEEE Trans Vis Comput Graph. 2021 Nov;27(11):4097-4106. doi: 10.1109/TVCG.2021.3106497. Epub 2021 Oct 27.
3. Physically-inspired Deep Light Estimation from a Homogeneous-Material Object for Mixed Reality Lighting.
IEEE Trans Vis Comput Graph. 2020 May;26(5):2002-2011. doi: 10.1109/TVCG.2020.2973050. Epub 2020 Feb 13.
4. Augmented Virtual Teleportation for High-Fidelity Telecollaboration.
IEEE Trans Vis Comput Graph. 2020 May;26(5):1923-1933. doi: 10.1109/TVCG.2020.2973065. Epub 2020 Feb 13.
5. Real-Time Lighting Estimation for Augmented Reality via Differentiable Screen-Space Rendering.
IEEE Trans Vis Comput Graph. 2023 Apr;29(4):2132-2145. doi: 10.1109/TVCG.2022.3141943. Epub 2023 Feb 28.
6. Realistic real-time outdoor rendering in augmented reality.
PLoS One. 2014 Sep 30;9(9):e108334. doi: 10.1371/journal.pone.0108334. eCollection 2014.
7. LivePhantom: Retrieving Virtual World Light Data to Real Environments.
PLoS One. 2016 Dec 8;11(12):e0166424. doi: 10.1371/journal.pone.0166424. eCollection 2016.
8. Acoustic Classification and Optimization for Multi-Modal Rendering of Real-World Scenes.
IEEE Trans Vis Comput Graph. 2018 Mar;24(3):1246-1259. doi: 10.1109/TVCG.2017.2666150. Epub 2017 Feb 9.
9. Long-Range Augmented Reality with Dynamic Occlusion Rendering.
IEEE Trans Vis Comput Graph. 2021 Nov;27(11):4236-4244. doi: 10.1109/TVCG.2021.3106434. Epub 2021 Oct 27.
10. Mobile augmented reality based indoor map for improving geo-visualization.
PeerJ Comput Sci. 2021 Sep 2;7:e704. doi: 10.7717/peerj-cs.704. eCollection 2021.

Cited By

1. Hierarchical mussel farm reconstruction from video with object tracking.
J R Soc N Z. 2024 Apr 25;55(6):1563-1588. doi: 10.1080/03036758.2024.2345316. eCollection 2025.