Liu Caixia, Zhu Minhong, Li Haisheng, Wei Xiulan, Liang Jiulin, Yao Qianwen
Beijing Key Laboratory of Big Data Technology for Food Safety, School of Computer and Artificial Intelligence, Beijing Technology and Business University, No. 33, Fucheng Road, Haidian District, Beijing 100048, China.
School of Logistics, Beijing Wuzi University, No. 321, Fuhe Street, Tongzhou District, Beijing 101149, China.
Sensors (Basel). 2025 Feb 28;25(5):1503. doi: 10.3390/s25051503.
With the widespread adoption of 3D scanning technology, depth-view-driven 3D reconstruction has become crucial for applications such as SLAM, virtual reality, and autonomous vehicles. However, because of self-occlusion and environmental occlusion, obtaining complete and error-free 3D shapes directly from 3D scans remains challenging, and previous reconstruction methods tend to lose details. To this end, we propose the Dynamic Quality Refinement Network (DQRNet) for reconstructing complete and accurate 3D shapes from a single depth view. DQRNet introduces a dynamic encoder-decoder and a detail quality refiner to generate high-resolution 3D shapes: the former employs a dynamic latent extractor to adaptively select the important parts of an object, while the latter employs global and local point refiners to enhance reconstruction quality. Experimental results on the ShapeNet dataset show that DQRNet captures details at boundaries and in key areas, achieving better accuracy and robustness than state-of-the-art methods.
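The abstract does not give implementation details, but the coarse-to-fine idea behind the global and local point refiners can be illustrated with a minimal NumPy sketch. Everything below — the function names, the centroid-centering "global" step, and the k-nearest-neighbor "local" smoothing — is a hypothetical stand-in for DQRNet's learned refiners, not the authors' implementation.

```python
import numpy as np

def global_refine(points):
    # Illustrative "global" refiner: apply a whole-cloud correction.
    # Here a simple centroid-centering stands in for a learned global offset.
    return points - points.mean(axis=0, keepdims=True)

def local_refine(points, k=4):
    # Illustrative "local" refiner: pull each point toward the mean of its
    # k nearest neighbors, smoothing noisy boundary regions of the cloud.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]  # k nearest neighbors, self excluded
    return 0.5 * points + 0.5 * points[idx].mean(axis=1)

# Stand-in for a coarse point cloud produced by the decoder.
rng = np.random.default_rng(0)
coarse = rng.normal(size=(64, 3))
refined = local_refine(global_refine(coarse))
```

In the actual network both steps would be learned modules; the sketch only shows the shape-preserving, two-stage (global then local) refinement pattern the abstract describes.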