
SREGS: Sparse-view Gaussian radiance fields with geometric regularization and region exploration

Authors

Li Xiaotong, Li Kefeng, Zhang Guangyuan, Zhu Zhenfang, Wang Peng, Wang Zhenfei, Fu Chen, Zhang Yongshuo, Fan Zhiming, Zhao Yongpeng

Affiliations

Shandong Key Laboratory of Technologies and Systems for Intelligent Construction Equipment, Shandong Jiaotong University, Jinan, 250357, Shandong, China; School of Information Science and Electrical Engineering, Shandong Jiaotong University, Jinan, 250357, Shandong, China.

Shandong Zhengyuan Yeda Environmental Technology Co., Jinan, 250101, Shandong, China.

Publication

Neural Netw. 2025 Nov;191:107820. doi: 10.1016/j.neunet.2025.107820. Epub 2025 Jul 9.

Abstract

Few-shot novel-view synthesis based on 3D Gaussian Splatting (3DGS) has recently shown remarkable progress. Existing methods usually rely on carefully designed geometric regularizers to reinforce geometric supervision; however, applying multiple regularizers consistently across scenes is hard to tune and often degrades robustness. Consequently, generating reliable geometry from extremely sparse viewpoints remains a key challenge. To overcome this limitation, we introduce SREGS, a framework tailored for few-shot reconstruction whose contributions focus on two aspects: explicitly consistent geometry and multi-scale depth-guided optimization. First, to explicitly optimize reconstruction consistency, we initialize the point cloud with 2D Gaussians, thereby enhancing depth consistency for the same Gaussian observed from different views. Second, we employ region-adaptive rapid densification to fill under-covered regions with additional representations, while an opacity-aware noise term injects stochasticity into each Gaussian to boost exploration in under-observed areas. In addition, to strengthen geometric refinement of the radiance field, we impose multi-scale depth constraints based on a monocular depth prior, performing geometric refinement from global to local scales and ensuring highly accurate reconstruction. Extensive experiments on LLFF, MipNeRF360, and Blender show that SREGS achieves higher synthesis quality with lower computational cost and demonstrates robust performance. The code is available at: https://github.com/LeeXiaoTong1/SREGS.
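The opacity-aware noise term and the multi-scale depth constraints described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the (1 − opacity) noise scaling, the least-squares scale-and-shift alignment of the monocular prior, and the downsampling factors are all assumptions made for the example.

```python
import numpy as np

def opacity_aware_noise(positions, opacities, sigma=0.01, rng=None):
    """Hypothetical sketch: perturb each Gaussian's center with noise
    scaled by (1 - opacity), so that low-opacity Gaussians in
    under-observed areas explore more aggressively while well-observed,
    high-opacity Gaussians stay put."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=positions.shape)  # (N, 3)
    scale = (1.0 - opacities)[:, None]  # (N, 1): more noise where opacity is low
    return positions + scale * noise

def multiscale_depth_loss(rendered, prior, scales=(1, 2, 4)):
    """Hypothetical sketch: L1 depth loss between the rendered depth map
    and a monocular depth prior, averaged over several downsampled
    resolutions -> coarse (global) to fine (local) supervision.
    The prior is first aligned by a least-squares scale and shift,
    since monocular depth is only defined up to an affine transform."""
    A = np.stack([prior.ravel(), np.ones(prior.size)], axis=1)
    s, t = np.linalg.lstsq(A, rendered.ravel(), rcond=None)[0]
    aligned = s * prior + t
    total = 0.0
    for k in scales:  # k = 1 is full resolution (local), larger k is coarser (global)
        total += np.mean(np.abs(rendered[::k, ::k] - aligned[::k, ::k]))
    return total / len(scales)
```

Under this reading, the noise term vanishes as a Gaussian's opacity approaches 1, so exploration concentrates where the model is least confident, and the depth loss supervises geometry at every scale simultaneously rather than only per-pixel.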

