ManhattanFusion: Online Dense Reconstruction of Indoor Scenes From Depth Sequences.

Publication Information

IEEE Trans Vis Comput Graph. 2022 Jul;28(7):2668-2681. doi: 10.1109/TVCG.2020.3036868. Epub 2022 May 26.

DOI: 10.1109/TVCG.2020.3036868
PMID: 33170778
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9803263/
Abstract

We present a new framework for online dense 3D reconstruction of indoor scenes by using only depth sequences. This research is particularly useful in cases with a poor light condition or in a nearly featureless indoor environment. The lack of RGB information makes long-range camera pose estimation difficult in a large indoor environment. The key idea of our research is to take advantage of the geometric prior of Manhattan scenes in each stage of the reconstruction pipeline with the specific aim to reduce the cumulative registration error and overall odometry drift in a long sequence. This idea is further boosted by local Manhattan frame growing and the local-to-global strategy that leads to implicit loop closure handling for a large indoor scene. Our proposed pipeline, namely ManhattanFusion, starts with planar alignment and local pose optimization where the Manhattan constraints are imposed to create detailed local segments. These segments preserve intrinsic scene geometry by minimizing the odometry drift even under complex and long trajectories. The final model is generated by integrating all local segments into a global volumetric representation under the constraint of Manhattan frame-based registration across segments. Our algorithm outperforms others that use depth data only in terms of both the mean distance error and the absolute trajectory error, and it is also very competitive compared with RGB-D based reconstruction algorithms. Moreover, our algorithm outperforms the state-of-the-art in terms of the surface area coverage by 10-40 percent, largely due to the usefulness and effectiveness of the Manhattan assumption through the reconstruction pipeline.
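The core geometric operation implied by the Manhattan assumption is snapping detected plane normals onto the nearest axis of a dominant orthogonal frame, which suppresses small rotational drift. A minimal sketch of that snapping step (the function name, the identity frame, and the toy normals are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def snap_to_manhattan_frame(normals, frame):
    """Snap each unit normal to the closest signed axis of a Manhattan
    frame (3x3 orthonormal matrix whose columns are the dominant directions)."""
    dots = normals @ frame                          # (N, 3) signed alignment with each axis
    axis = np.abs(dots).argmax(axis=1)              # best-aligned axis per normal
    sign = np.sign(dots[np.arange(len(normals)), axis])
    return sign[:, None] * frame[:, axis].T         # (N, 3) snapped normals

# Toy example: world-aligned frame, slightly noisy wall/floor normals.
frame = np.eye(3)
normals = np.array([[0.99, 0.10, 0.00],
                    [0.05, -0.98, 0.10],
                    [-0.10, 0.00, 0.99]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
snapped = snap_to_manhattan_frame(normals, frame)
# snapped -> [[1, 0, 0], [0, -1, 0], [0, 0, 1]]
```

In the full pipeline described above, the frame itself would be estimated from the depth data rather than fixed, and the snapped normals would feed the planar alignment and segment-to-segment registration constraints.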


Similar Articles

1. ManhattanFusion: Online Dense Reconstruction of Indoor Scenes From Depth Sequences.
IEEE Trans Vis Comput Graph. 2022 Jul;28(7):2668-2681. doi: 10.1109/TVCG.2020.3036868. Epub 2022 May 26.
2. Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments.
Sensors (Basel). 2022 Nov 9;22(22):8644. doi: 10.3390/s22228644.
3. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.
4. Neural 3D Scene Reconstruction With Indoor Planar Priors.
IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6355-6366. doi: 10.1109/TPAMI.2024.3379833. Epub 2024 Aug 6.
5. PlaneFusion: Real-Time Indoor Scene Reconstruction With Planar Prior.
IEEE Trans Vis Comput Graph. 2022 Dec;28(12):4671-4684. doi: 10.1109/TVCG.2021.3099480. Epub 2022 Oct 26.
6. RGB-D SLAM Using Point-Plane Constraints for Indoor Environments.
Sensors (Basel). 2019 Jun 17;19(12):2721. doi: 10.3390/s19122721.
7. Indoor Scene Point Cloud Registration Algorithm Based on RGB-D Camera Calibration.
Sensors (Basel). 2017 Aug 15;17(8):1874. doi: 10.3390/s17081874.
8. Three-Dimensional Reconstruction of Indoor Scenes Based on Implicit Neural Representation.
J Imaging. 2024 Sep 16;10(9):231. doi: 10.3390/jimaging10090231.
9. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
10. RGB-D SLAM with Manhattan Frame Estimation Using Orientation Relevance.
Sensors (Basel). 2019 Mar 1;19(5):1050. doi: 10.3390/s19051050.

References Cited in This Article

1. HeteroFusion: Dense Scene Reconstruction Integrating Multi-Sensors.
IEEE Trans Vis Comput Graph. 2020 Nov;26(11):3217-3230. doi: 10.1109/TVCG.2019.2919619. Epub 2019 May 28.
2. RGB-D SLAM with Manhattan Frame Estimation Using Orientation Relevance.
Sensors (Basel). 2019 Mar 1;19(5):1050. doi: 10.3390/s19051050.
3. Collaborative Large-Scale Dense 3D Reconstruction with Online Inter-Agent Pose Optimisation.
IEEE Trans Vis Comput Graph. 2018 Nov;24(11):2895-2905. doi: 10.1109/TVCG.2018.2868533. Epub 2018 Oct 15.
4. Robust and Globally Optimal Manhattan Frame Estimation in Near Real Time.
IEEE Trans Pattern Anal Mach Intell. 2019 Mar;41(3):682-696. doi: 10.1109/TPAMI.2018.2799944. Epub 2018 Jan 30.
5. MixedFusion: Real-Time Reconstruction of an Indoor Scene with Dynamic Objects.
IEEE Trans Vis Comput Graph. 2018 Dec;24(12):3137-3146. doi: 10.1109/TVCG.2017.2786233. Epub 2017 Dec 28.