
ProbIBR: Fast Image-Based Rendering With Learned Probability-Guided Sampling.

Author Information

Zhou Yuemei, Yu Tao, Zheng Zerong, Wu Gaochang, Zhao Guihua, Jiang Wenbo, Fu Ying, Liu Yebin

Publication Information

IEEE Trans Vis Comput Graph. 2025 Mar;31(3):1888-1901. doi: 10.1109/TVCG.2024.3372152. Epub 2025 Jan 30.

Abstract

We present a general, fast, and practical solution for interpolating novel views of diverse real-world scenes given a sparse set of nearby views. Existing generic novel view synthesis methods rely on time-consuming scene geometry pre-computation or on redundant sampling of the entire space for neural volumetric rendering, limiting overall efficiency. Instead, we incorporate learned MVS priors into the neural volume rendering pipeline and improve rendering efficiency by reducing the number of sampling points: fewer but more important points are sampled under the guidance of depth probability distributions extracted from the learned MVS architecture. Based on this learned probability-guided sampling, we develop a neural volume rendering module that effectively integrates source view information with the learned scene structures. We further propose confidence-aware refinement to improve the rendering results in uncertain, occluded, and unreferenced regions. Moreover, we build a four-view camera system for holographic display and provide a real-time version of our framework for free-viewpoint experience, in which novel view images at a spatial resolution of 512×512 are rendered at around 20 fps on a single RTX 3090 GPU. Experiments show that our method achieves 15 to 40 times faster rendering than state-of-the-art baselines, with strong generalization capacity and comparably high-quality novel view synthesis.
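To make the sampling idea in the abstract concrete, the following is a minimal sketch (not the authors' code) of probability-guided point sampling: given per-ray depth probability distributions, such as those produced by a softmax over a learned MVS cost volume, a small number of depth samples is drawn concentrated near likely surfaces instead of uniformly along the whole ray. The names `depth_probs`, `depth_bins`, and `num_samples` are illustrative assumptions, not identifiers from the paper.

```python
import torch


def probability_guided_samples(depth_probs: torch.Tensor,
                               depth_bins: torch.Tensor,
                               num_samples: int = 8,
                               jitter: bool = True) -> torch.Tensor:
    """Sample a few depths per ray according to a learned depth distribution.

    depth_probs: (num_rays, num_bins) softmax probabilities over depth bins.
    depth_bins:  (num_bins,) bin-center depths along each ray.
    Returns:     (num_rays, num_samples) sampled depths, sorted front-to-back.
    """
    num_bins = depth_bins.shape[0]
    # Draw bin indices with probability proportional to the depth distribution,
    # so most samples land near the predicted surface.
    idx = torch.multinomial(depth_probs, num_samples, replacement=True)  # (R, S)
    depths = depth_bins[idx]                                             # (R, S)
    if jitter:
        # Perturb samples within a bin so they do not collapse onto bin centers.
        bin_width = (depth_bins[-1] - depth_bins[0]) / (num_bins - 1)
        depths = depths + (torch.rand_like(depths) - 0.5) * bin_width
    # Volume rendering integrates samples ordered along the ray.
    return torch.sort(depths, dim=-1).values


if __name__ == "__main__":
    rays, bins = 4, 64
    probs = torch.softmax(torch.randn(rays, bins), dim=-1)
    bin_centers = torch.linspace(2.0, 6.0, bins)
    z = probability_guided_samples(probs, bin_centers, num_samples=8)
    print(z.shape)  # torch.Size([4, 8])
```

With 8 samples per ray instead of the dozens to hundreds typical of uniform volumetric sampling, the rendering network is evaluated at far fewer points, which is consistent with the efficiency gains the abstract reports.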
