Han Jiali, Shen Shuhan
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.
University of Chinese Academy of Sciences, Beijing, 100049, China.
Vis Comput Ind Biomed Art. 2019 Aug 7;2(1):10. doi: 10.1186/s42492-019-0020-y.
Image-based 3D modeling is an effective method for reconstructing large-scale scenes, especially city-level scenarios. In the image-based modeling pipeline, obtaining a watertight mesh model from a noisy multi-view stereo point cloud is a key step toward ensuring model quality. However, some state-of-the-art methods rely on a global Delaunay-based optimization formed over all the points and cameras; consequently, they encounter scaling problems when dealing with large scenes. To circumvent these limitations, this study proposes a scalable point-cloud meshing approach that supports the reconstruction of city-scale scenes with low time and memory costs. First, the entire scene is divided along the x and y axes into several overlapping chunks, so that each chunk fits within the memory limit. Then, the Delaunay-based optimization is performed to extract meshes for each chunk in parallel. Finally, the local meshes are merged by resolving local inconsistencies in the overlapping areas between chunks. We test the proposed method on three city-scale scenes with hundreds of millions of points and thousands of images, and demonstrate its scalability, accuracy, and completeness compared with state-of-the-art methods.
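The chunking step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an axis-aligned nx-by-ny grid over the x-y bounding box with a fixed overlap margin, and the function name and parameters are hypothetical.

```python
import numpy as np

def split_into_chunks(points, nx, ny, overlap):
    """Divide an (N, 3) point cloud into an nx-by-ny grid of chunks
    along the x and y axes, each enlarged by a fixed overlap margin.
    Points in the overlap bands appear in multiple chunks, which is
    what later allows the per-chunk meshes to be merged consistently.
    """
    mins = points[:, :2].min(axis=0)          # x-y bounding box
    maxs = points[:, :2].max(axis=0)
    cell = (maxs - mins) / (nx, ny)           # base (non-overlapping) cell size
    chunks = []
    for i in range(nx):
        for j in range(ny):
            lo = mins + cell * (i, j) - overlap         # expand each cell
            hi = mins + cell * (i + 1, j + 1) + overlap # by the margin
            mask = np.all((points[:, :2] >= lo) &
                          (points[:, :2] <= hi), axis=1)
            chunks.append(points[mask])
    return chunks
```

Each chunk can then be meshed independently (e.g., in parallel worker processes), with the shared points in the overlap bands providing the common geometry needed to reconcile the local meshes.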