
PAPRec: 3D Point Cloud Reconstruction Based on Prior-Guided Adaptive Probabilistic Network.

Authors

Liu Caixia, Zhu Minhong, Chen Yali, Wei Xiulan, Li Haisheng

Affiliations

Beijing Key Laboratory of Big Data Technology for Food Safety, School of Computer and Artificial Intelligence, Beijing Technology and Business University, No.33, Fucheng Road, Haidian District, Beijing 100048, China.

School of Logistics, Beijing Wuzi University, No.321, Fuhe Street, Tongzhou District, Beijing 101149, China.

Publication

Sensors (Basel). 2025 Feb 22;25(5):1354. doi: 10.3390/s25051354.

Abstract

Inferring a complete 3D shape from a single-view image is an ill-posed problem. Existing methods often suffer from insufficient feature expression, unstable training and limited constraints, resulting in low-accuracy, ambiguous reconstructions. To address these problems, we propose a prior-guided adaptive probabilistic network for single-view 3D reconstruction, called PAPRec. In the training stage, PAPRec encodes a single-view image and its corresponding 3D prior into an image feature distribution and a point cloud feature distribution, respectively. PAPRec then utilizes a latent normalizing flow to fit the two distributions and obtains a latent vector with rich cues. PAPRec finally introduces an adaptive probabilistic network consisting of a shape normalizing flow and a diffusion model to decode the latent vector into a complete 3D point cloud. Unlike existing methods, PAPRec fully learns the global and local features of objects by innovatively integrating 3D prior guidance and the adaptive probabilistic network under a loss function combining prior, flow and diffusion losses. The experimental results on the public ShapeNet dataset show that PAPRec, on average, improves CD by 2.62%, EMD by 5.99% and F1 by 4.41% in comparison to several state-of-the-art methods.
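The training pipeline described in the abstract (prior-guided encoding, latent-flow fitting, and flow-plus-diffusion decoding under a combined loss) can be sketched schematically. The snippet below is a minimal pure-Python illustration of the data flow only, not the paper's implementation; every function name, the latent dimension, the stub computations, and the loss weights are illustrative assumptions.

```python
import random

LATENT_DIM = 8  # illustrative; the abstract does not specify dimensions


def encode_image(image):
    # Stand-in for the image encoder: map an image to an image feature
    # vector. A real implementation would use a learned CNN encoder.
    return [sum(image) / len(image)] * LATENT_DIM


def encode_prior(points):
    # Stand-in for the point-cloud encoder applied to the 3D prior.
    flat = [c for p in points for c in p]
    return [sum(flat) / len(flat)] * LATENT_DIM


def latent_flow(img_feat, pc_feat):
    # The latent normalizing flow fits the two feature distributions;
    # here we simply average them to produce a latent vector with cues
    # from both the image and the 3D prior.
    return [(a + b) / 2 for a, b in zip(img_feat, pc_feat)]


def decode(latent):
    # Stand-in for the adaptive probabilistic decoder (shape normalizing
    # flow + diffusion model): perturb the latent to emit 16 3D points.
    return [[z + random.gauss(0.0, 0.01) for z in latent[:3]] for _ in range(16)]


def total_loss(l_prior, l_flow, l_diff, w=(1.0, 1.0, 1.0)):
    # Combined training objective: prior + flow + diffusion losses
    # (the relative weights here are assumed, not from the paper).
    return w[0] * l_prior + w[1] * l_flow + w[2] * l_diff


# Toy forward pass through the sketched pipeline.
image = [0.2, 0.4, 0.6]
prior_points = [[0.0, 0.1, 0.2], [0.3, 0.4, 0.5]]
latent = latent_flow(encode_image(image), encode_prior(prior_points))
cloud = decode(latent)
print(len(cloud), len(cloud[0]))  # 16 3
```

The sketch only fixes the order of the stages; the actual network components (encoders, normalizing flows, diffusion model) are trainable modules in the paper.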


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/871c/11902572/c557ec449b95/sensors-25-01354-g001.jpg
