
Learning Heterogeneous Mixture of Scene Experts for Large-Scale Neural Radiance Fields.

Authors

Mi Zhenxing, Yin Ping, Xiao Xue, Xu Dan

Publication

IEEE Trans Pattern Anal Mach Intell. 2025 Aug 27;PP. doi: 10.1109/TPAMI.2025.3603305.

Abstract

Recent Neural Radiance Field (NeRF) methods on large-scale scenes have demonstrated promising results and underlined the importance of scene decomposition for scalable NeRFs. Although these methods achieve reasonable scalability, several critical problems remain unexplored in existing large-scale NeRF modeling methods, i.e., learnable decomposition, modeling of scene heterogeneity, and modeling efficiency. In this paper, we introduce Switch-NeRF++, a Heterogeneous Mixture of Hash Experts (HMoHE) network that addresses these challenges within a unified framework. Our framework is a highly scalable NeRF that learns heterogeneous decomposition and heterogeneous Neural Radiance Fields efficiently for large-scale scenes in an end-to-end manner. In our framework, a gating network learns to decompose scenes into partitions and allocates 3D points to specialized NeRF experts. This gating network is co-optimized with the experts within our proposed Sparsely Gated Mixture of Experts (MoE) NeRF framework. Our network architecture incorporates a hash-based gating network and distinct heterogeneous hash experts. The hash-based gating efficiently learns the decomposition of the large-scale scene. The heterogeneous hash experts consist of hash grids with different resolution ranges, which enables effective learning of the heterogeneous representation of the different decomposed parts of large-scale complex scenes. These design choices make our framework an end-to-end, highly scalable NeRF solution for real-world large-scale scene modeling that achieves both quality and efficiency. We evaluate the accuracy and scalability of our method on existing large-scale NeRF datasets. We also introduce a new dataset with very large-scale scenes ($>6.5\,\mathrm{km}^2$) from UrbanBIS. Extensive experiments demonstrate that our approach can be easily scaled to various large-scale scenes and achieves state-of-the-art scene rendering accuracy. Furthermore, our method exhibits significant efficiency gains, with an 8x acceleration in training and a 16x acceleration in rendering compared to Switch-NeRF, the best-performing competitor. The code and trained models will be released at https://github.com/MiZhenxing/Switch-NeRF.
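To make the architecture described above more concrete, the following is a minimal, illustrative PyTorch sketch of a sparsely gated mixture of hash experts: a gating network routes each 3D point to a top-1 expert, and each expert owns hash grids covering a different resolution range. This is not the authors' released implementation; all class and parameter names (e.g. HashGridEncoding, SwitchMoENeRF, the resolution ranges) are hypothetical, the hash encoding is simplified to a nearest-vertex lookup without trilinear interpolation, and top-1 (Switch-style) routing is assumed from the Sparsely Gated MoE description.

```python
# Hedged sketch of a sparsely gated Mixture-of-Hash-Experts NeRF.
# Names and hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HashGridEncoding(nn.Module):
    """Simplified multi-resolution hash-grid encoding.

    Uses a nearest-vertex lookup (no trilinear interpolation) for brevity."""

    def __init__(self, n_levels=8, base_res=16, max_res=512,
                 table_size=2**14, feat_dim=2):
        super().__init__()
        growth = (max_res / base_res) ** (1.0 / max(n_levels - 1, 1))
        self.resolutions = [int(base_res * growth**i) for i in range(n_levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
             for _ in range(n_levels)])
        self.table_size = table_size
        self.out_dim = n_levels * feat_dim

    def forward(self, x):                      # x: (B, 3) in [0, 1]
        primes = torch.tensor([1, 2654435761, 805459861], device=x.device)
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            idx = (x * res).long()             # nearest grid vertex per level
            h = (idx * primes).sum(-1) % self.table_size
            feats.append(table[h])
        return torch.cat(feats, dim=-1)        # (B, n_levels * feat_dim)


class HashExpert(nn.Module):
    """One NeRF expert: hash grids over its own resolution range + a tiny MLP."""

    def __init__(self, base_res, max_res, out_dim=4):
        super().__init__()
        self.enc = HashGridEncoding(base_res=base_res, max_res=max_res)
        self.mlp = nn.Sequential(
            nn.Linear(self.enc.out_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim))            # e.g. RGB + density

    def forward(self, x):
        return self.mlp(self.enc(x))


class SwitchMoENeRF(nn.Module):
    """A hash-based gating network decomposes the scene: each 3D point is
    dispatched to its top-1 expert, and experts differ in resolution range."""

    def __init__(self, res_ranges=((16, 128), (32, 256), (64, 512), (128, 1024))):
        super().__init__()
        self.gate_enc = HashGridEncoding(n_levels=4, max_res=128)
        self.gate = nn.Linear(self.gate_enc.out_dim, len(res_ranges))
        self.experts = nn.ModuleList(
            [HashExpert(lo, hi) for lo, hi in res_ranges])

    def forward(self, x):                      # x: (B, 3) in [0, 1]
        logits = self.gate(self.gate_enc(x))
        probs = F.softmax(logits, dim=-1)
        top_p, top_i = probs.max(dim=-1)       # top-1 (Switch-style) routing
        out = x.new_zeros(x.shape[0], 4)
        for e, expert in enumerate(self.experts):
            mask = top_i == e                  # points assigned to expert e
            if mask.any():
                # Scale by the gate probability so the gating network
                # receives gradients and is co-optimized with the experts.
                out[mask] = expert(x[mask]) * top_p[mask, None]
        return out


if __name__ == "__main__":
    model = SwitchMoENeRF()
    pts = torch.rand(1024, 3)                  # sampled 3D points in [0, 1]^3
    rgb_sigma = model(pts)
    print(rgb_sigma.shape)                     # torch.Size([1024, 4])
```

The sparse dispatch means each point only runs through one expert's hash grids and MLP, which is the source of the efficiency the abstract reports; the differing resolution ranges are one plausible reading of "heterogeneous" experts for scene parts of differing complexity.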
