
CuFusion: Accurate Real-Time Camera Tracking and Volumetric Scene Reconstruction with a Cuboid.

Authors

Zhang Chen, Hu Yu

Affiliation

College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.

Publication

Sensors (Basel). 2017 Oct 1;17(10):2260. doi: 10.3390/s17102260.

DOI: 10.3390/s17102260
PMID: 28974030
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5677406/
Abstract

Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we keep drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open-source (https://github.com/zhangxaochen/CuFusion) for other researchers to reproduce and verify our results.

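The abstract contrasts the proposed prediction-corrected data fusion with the simple moving-average fusion used by KinectFusion-style pipelines. The following is an illustrative sketch of that contrast on a single TSDF sample, not the paper's actual per-voxel implementation; the function names and the deviation threshold `tol` are assumptions for illustration.

```python
def fuse_moving_average(tsdf, weight, observed_sdf, obs_weight=1.0, max_weight=64.0):
    """KinectFusion-style running weighted average of truncated SDF samples.

    Averaging suppresses depth noise, but repeatedly blending observations
    also smears high-frequency geometry such as sharp object edges.
    """
    fused = (tsdf * weight + observed_sdf * obs_weight) / (weight + obs_weight)
    return fused, min(weight + obs_weight, max_weight)


def fuse_prediction_corrected(tsdf, weight, observed_sdf, tol=0.02,
                              obs_weight=1.0, max_weight=64.0):
    """Illustrative prediction-corrected update: treat the current fused
    value as the prediction; when a new observation deviates beyond `tol`
    (as happens near sharp edges that a plain average would blur), correct
    by restarting accumulation from the observation instead of blending.
    """
    if abs(observed_sdf - tsdf) > tol:
        return observed_sdf, obs_weight  # correction: trust the observation
    fused = (tsdf * weight + observed_sdf * obs_weight) / (weight + obs_weight)
    return fused, min(weight + obs_weight, max_weight)
```

With a heavily weighted stale estimate, the moving average barely moves toward a conflicting observation, while the corrected variant snaps to it; this is the intuition behind recovering sharp edges and high-curvature geometry.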

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/8c0856088338/sensors-17-02260-g013a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/8dc22b162a20/sensors-17-02260-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/8d4637dea954/sensors-17-02260-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/0353453a96b7/sensors-17-02260-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/6e502fa9a64f/sensors-17-02260-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/6ed6cf154e30/sensors-17-02260-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/1893bb609fee/sensors-17-02260-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/35c565413cb1/sensors-17-02260-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/9313699f6168/sensors-17-02260-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/4f1008ad2e28/sensors-17-02260-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/9f82ea63885c/sensors-17-02260-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/d1746445b0f1/sensors-17-02260-g011a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f666/5677406/5d1e948d3e8c/sensors-17-02260-g012a.jpg

Similar Articles

1. CuFusion: Accurate Real-Time Camera Tracking and Volumetric Scene Reconstruction with a Cuboid.
Sensors (Basel). 2017 Oct 1;17(10):2260. doi: 10.3390/s17102260.
2. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.
3. Robust and Efficient CPU-Based RGB-D Scene Reconstruction.
Sensors (Basel). 2018 Oct 28;18(11):3652. doi: 10.3390/s18113652.
4. NeRF-OR: neural radiance fields for operating room scene reconstruction from sparse-view RGB-D videos.
Int J Comput Assist Radiol Surg. 2025 Jan;20(1):147-156. doi: 10.1007/s11548-024-03261-5. Epub 2024 Sep 13.
5. Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction.
IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2349-2365. doi: 10.1109/TPAMI.2017.2648803. Epub 2017 Jan 5.
6. Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation.
Sensors (Basel). 2020 Feb 18;20(4):1110. doi: 10.3390/s20041110.
7. Line-Based 6-DoF Object Pose Estimation and Tracking With an Event Camera.
IEEE Trans Image Process. 2024;33:4765-4780. doi: 10.1109/TIP.2024.3445736. Epub 2024 Aug 30.
8. VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows.
IEEE Trans Cybern. 2024 Mar;54(3):1997-2010. doi: 10.1109/TCYB.2023.3318601. Epub 2024 Feb 9.
9. 3D object reconstruction: A comprehensive view-dependent dataset.
Data Brief. 2024 Jun 2;55:110569. doi: 10.1016/j.dib.2024.110569. eCollection 2024 Aug.
10. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos.
Med Image Anal. 2021 Jul;71:102058. doi: 10.1016/j.media.2021.102058. Epub 2021 Apr 15.

Cited By

1. Incremental 3D Cuboid Modeling with Drift Compensation.
Sensors (Basel). 2019 Jan 6;19(1):178. doi: 10.3390/s19010178.
2. Robust and Efficient CPU-Based RGB-D Scene Reconstruction.
Sensors (Basel). 2018 Oct 28;18(11):3652. doi: 10.3390/s18113652.

References

1. Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction.
IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2349-2365. doi: 10.1109/TPAMI.2017.2648803. Epub 2017 Jan 5.
2. Structural modeling from depth images.
IEEE Trans Vis Comput Graph. 2015 Nov;21(11):1230-40. doi: 10.1109/TVCG.2015.2459831. Epub 2015 Jul 23.
3. Accuracy and resolution of Kinect depth data for indoor mapping applications.
Sensors (Basel). 2012;12(2):1437-54. doi: 10.3390/s120201437. Epub 2012 Feb 1.