
DQRNet: Dynamic Quality Refinement Network for 3D Reconstruction from a Single Depth View.

Authors

Liu Caixia, Zhu Minhong, Li Haisheng, Wei Xiulan, Liang Jiulin, Yao Qianwen

Affiliations

Beijing Key Laboratory of Big Data Technology for Food Safety, School of Computer and Artificial Intelligence, Beijing Technology and Business University, No. 33, Fucheng Road, Haidian District, Beijing 100048, China.

School of Logistics, Beijing Wuzi University, No. 321, Fuhe Street, Tongzhou District, Beijing 101149, China.

Publication

Sensors (Basel). 2025 Feb 28;25(5):1503. doi: 10.3390/s25051503.

DOI: 10.3390/s25051503
PMID: 40096341
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11902519/
Abstract

With the widespread adoption of 3D scanning technology, depth view-driven 3D reconstruction has become crucial for applications such as SLAM, virtual reality, and autonomous vehicles. However, due to the effects of self-occlusion and environmental occlusion, obtaining complete and error-free 3D shapes directly from 3D scans remains challenging, as previous reconstruction methods tend to lose details. To this end, we propose the Dynamic Quality Refinement Network (DQRNet) for reconstructing complete and accurate 3D shapes from a single depth view. DQRNet introduces a dynamic encoder-decoder and a detail quality refiner to generate high-resolution 3D shapes: the former employs a dynamic latent extractor to adaptively select the important parts of an object, and the latter employs global and local point refiners to enhance reconstruction quality. Experimental results on the ShapeNet dataset show that DQRNet captures details at boundaries and in key areas, achieving better accuracy and robustness than state-of-the-art (SOTA) methods.
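The pipeline described in the abstract (single depth view → adaptively selected latent code → coarse shape → global and local refinement) can be sketched at the level of data flow. The sketch below is a hypothetical NumPy stand-in, not the authors' implementation: all module names (`dynamic_latent_extractor`, `global_refiner`, `local_refiner`), shapes, and heuristics (variance as "importance", nearest-neighbor smoothing as "local refinement") are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_latent_extractor(depth_view, k=16):
    """Keep the k highest-variance 8x8 blocks ("important parts") and
    pool them into a latent code -- a crude stand-in for DQRNet's
    learned dynamic selection."""
    H, W = depth_view.shape
    blocks = (depth_view.reshape(H // 8, 8, W // 8, 8)
              .transpose(0, 2, 1, 3).reshape(-1, 64))
    saliency = blocks.var(axis=1)
    top = blocks[np.argsort(saliency)[-k:]]
    return top.mean(axis=0)                    # latent vector, shape (64,)

def decoder(latent, n_points=1024):
    """Decode the latent into a coarse point set (random linear map here)."""
    W_dec = rng.standard_normal((n_points * 3, latent.size)) * 0.01
    return (W_dec @ latent).reshape(n_points, 3)

def global_refiner(points):
    """Global refinement: re-center and normalize the whole shape."""
    centered = points - points.mean(axis=0)
    return centered / (np.abs(centered).max() + 1e-8)

def local_refiner(points, step=0.05):
    """Local refinement: nudge each point toward its nearest neighbor,
    smoothing boundaries and key areas."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = points[d.argmin(axis=1)]
    return points + step * (nn - points)

depth = rng.random((64, 64))                   # a single depth view
latent = dynamic_latent_extractor(depth)
coarse = decoder(latent)
refined = local_refiner(global_refiner(coarse))
print(refined.shape)                           # (1024, 3)
```

In the paper these stages are learned networks trained end to end; the sketch only fixes the overall structure (encode → decode → refine globally → refine locally) that the abstract describes.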


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a6a/11902519/c47d4bcd5fe5/sensors-25-01503-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a6a/11902519/038f23c8a6c5/sensors-25-01503-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a6a/11902519/9847ace46ac2/sensors-25-01503-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a6a/11902519/6d3aa4351afc/sensors-25-01503-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6a6a/11902519/0408d2da1b0d/sensors-25-01503-g005.jpg

Similar Articles

1. DQRNet: Dynamic Quality Refinement Network for 3D Reconstruction from a Single Depth View.
Sensors (Basel). 2025 Feb 28;25(5):1503. doi: 10.3390/s25051503.
2. Sym3DNet: Symmetric 3D Prior Network for Single-View 3D Reconstruction.
Sensors (Basel). 2022 Jan 11;22(2):518. doi: 10.3390/s22020518.
3. View-Aware Geometry-Structure Joint Learning for Single-View 3D Shape Reconstruction.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6546-6561. doi: 10.1109/TPAMI.2021.3090917. Epub 2022 Sep 15.
4. Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation.
Sensors (Basel). 2024 Dec 26;25(1):80. doi: 10.3390/s25010080.
5. Glissando-Net: Deep Single View Category Level Pose Estimation and 3D Reconstruction.
IEEE Trans Pattern Anal Mach Intell. 2025 Apr;47(4):2298-2312. doi: 10.1109/TPAMI.2024.3519674. Epub 2025 Mar 6.
6. Single-view 3D reconstruction dual attention.
PeerJ Comput Sci. 2024 Oct 22;10:e2403. doi: 10.7717/peerj-cs.2403. eCollection 2024.
7. Dense 3D Object Reconstruction from a Single Depth View.
IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):2820-2834. doi: 10.1109/TPAMI.2018.2868195. Epub 2018 Sep 3.
8. Cascaded Refinement Network for Point Cloud Completion With Self-Supervision.
IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):8139-8150. doi: 10.1109/TPAMI.2021.3108410. Epub 2022 Oct 4.
9. Point Cloud Completion Network Applied to Vehicle Data.
Sensors (Basel). 2022 Sep 27;22(19):7346. doi: 10.3390/s22197346.
10. Point Cloud Completion Via Skeleton-Detail Transformer.
IEEE Trans Vis Comput Graph. 2023 Oct;29(10):4229-4242. doi: 10.1109/TVCG.2022.3185247. Epub 2023 Sep 1.

References Cited in This Article

1. What's the Situation With Intelligent Mesh Generation: A Survey and Perspectives.
IEEE Trans Vis Comput Graph. 2024 Aug;30(8):4997-5017. doi: 10.1109/TVCG.2023.3281781. Epub 2024 Jul 1.
2. CP3: Unifying Point Cloud Completion by Pretrain-Prompt-Predict Paradigm.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9583-9594. doi: 10.1109/TPAMI.2023.3257026. Epub 2023 Jun 30.
3. Snowflake Point Deconvolution for Point Cloud Completion and Generation With Skip-Transformer.
IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):6320-6338. doi: 10.1109/TPAMI.2022.3217161. Epub 2023 Apr 3.
4. Perceptual Quality Assessment of Colored 3D Point Clouds.
IEEE Trans Vis Comput Graph. 2023 Aug;29(8):3642-3655. doi: 10.1109/TVCG.2022.3167151. Epub 2023 Jun 29.
5. Meta-PU: An Arbitrary-Scale Upsampling Network for Point Cloud.
IEEE Trans Vis Comput Graph. 2022 Sep;28(9):3206-3218. doi: 10.1109/TVCG.2021.3058311. Epub 2022 Jul 29.
6. Dense 3D Object Reconstruction from a Single Depth View.
IEEE Trans Pattern Anal Mach Intell. 2019 Dec;41(12):2820-2834. doi: 10.1109/TPAMI.2018.2868195. Epub 2018 Sep 3.