Wurster Skylar W, Xiong Tianyu, Shen Han-Wei, Guo Hanqi, Peterka Tom
IEEE Trans Vis Comput Graph. 2024 Jan;30(1):965-974. doi: 10.1109/TVCG.2023.3327194. Epub 2023 Dec 27.
Scene representation networks (SRNs) have recently been proposed for the compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain-decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain, dynamically allocating more neural network resources where error is high in the volume. This improves the state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring the expensive octree refinement, pruning, and traversal of previous adaptive models. In our domain-decomposition approach for representing large-scale data, we train a set of APMGSRNs in parallel on separate bricks of the volume, reducing training time while avoiding the overhead of an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for real-time neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.
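The core idea of the architecture described above, multiple feature grids whose placement within the domain is itself learned, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's code is PyTorch-based, while this sketch uses NumPy for a 2-D toy version; the grid count, resolution, feature width, and MLP sizes are all hypothetical, and the "learned" placement parameters are shown as plain arrays rather than being optimized.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GRIDS, RES, FEAT = 4, 8, 2  # hypothetical sizes for illustration

# Each grid stores trainable features plus a trainable placement
# (per-axis scale and translation) mapping world coords -> grid coords.
# In APMGSRN these placements are optimized so grids shrink onto
# high-error regions of the volume.
grids = [{
    "features": rng.standard_normal((RES, RES, FEAT)),
    "scale": np.ones(2),       # learned in the real model
    "translate": np.zeros(2),  # learned in the real model
} for _ in range(N_GRIDS)]

def bilinear(feat, u, v):
    """Bilinearly interpolate a (RES, RES, FEAT) grid at continuous (u, v)."""
    u = np.clip(u, 0.0, RES - 1.0)
    v = np.clip(v, 0.0, RES - 1.0)
    u0, v0 = int(u), int(v)
    u1, v1 = min(u0 + 1, RES - 1), min(v0 + 1, RES - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat[u0, v0] + du * (1 - dv) * feat[u1, v0]
            + (1 - du) * dv * feat[u0, v1] + du * dv * feat[u1, v1])

def encode(x):
    """Concatenate interpolated features from every adaptively placed grid."""
    parts = []
    for g in grids:
        local = (x - g["translate"]) * g["scale"]  # world -> local coords
        uv = (local * 0.5 + 0.5) * (RES - 1)       # [-1, 1] -> [0, RES-1]
        parts.append(bilinear(g["features"], uv[0], uv[1]))
    return np.concatenate(parts)                   # shape (N_GRIDS * FEAT,)

# Tiny MLP decoder mapping the concatenated features to a scalar value.
W1 = rng.standard_normal((N_GRIDS * FEAT, 16))
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1))
b2 = np.zeros(1)

def srn(x):
    h = np.maximum(encode(x) @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2)[0]

value = srn(np.array([0.25, -0.4]))  # scalar field value at one query point
```

Because each grid carries its own affine placement, a point is decoded from wherever the grids overlap it; grids that have contracted onto a complex region contribute high-resolution features there, which is how parameter allocation adapts without any octree refinement or traversal.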