Robust RGB-D SLAM Using Point and Line Features for Low Textured Scene.

Authors

Zou Yajing, Eldemiry Amr, Li Yaxin, Chen Wu

Affiliations

Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518057, China.

Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China.

Publication

Sensors (Basel). 2020 Sep 2;20(17):4984. doi: 10.3390/s20174984.

DOI: 10.3390/s20174984
PMID: 32887486
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7506666/
Abstract

Three-dimensional (3D) reconstruction using an RGB-D camera, which captures color images and depth simultaneously, is attractive because it can significantly reduce equipment cost and data-collection time. Point features are commonly used to align two RGB-D frames, but when reliable point features are lacking, RGB-D simultaneous localization and mapping (SLAM) is prone to failure in low-textured scenes. To overcome this problem, this paper proposes a robust RGB-D SLAM system that fuses both points and lines, since lines provide robust geometric constraints when points are insufficient. To fuse line constraints comprehensively, we combine 2D and 3D line reprojection errors with the point reprojection error in a novel cost function. To solve the cost function and filter out wrong feature matches, we build a robust pose solver using the Gauss-Newton method and the Chi-Square test. To correct drift in the camera poses, we maintain a sliding-window framework that updates keyframe poses and their related features. We evaluate the proposed system on public datasets and in real-world experiments, and show that it is comparable to or better than state-of-the-art methods in terms of both accuracy and robustness.
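The joint cost described in the abstract — point reprojection error combined with 2D and 3D line reprojection errors — can be written generically as follows. This is a sketch only: the symbols, covariances, and the robust kernel $\rho$ are illustrative assumptions, not the paper's exact notation.

$$
E(\xi) \;=\; \sum_{i} \rho\!\left(\lVert \mathbf{r}_{p,i}(\xi) \rVert^{2}_{\Sigma_{p,i}}\right)
\;+\; \sum_{j} \rho\!\left(\lVert \mathbf{r}^{2D}_{l,j}(\xi) \rVert^{2}_{\Sigma_{l,j}}\right)
\;+\; \sum_{k} \rho\!\left(\lVert \mathbf{r}^{3D}_{l,k}(\xi) \rVert^{2}_{\Sigma_{l,k}}\right)
$$

Here $\xi$ is the camera pose, $\mathbf{r}_{p,i}$ the point reprojection residuals, and $\mathbf{r}^{2D}_{l,j}$, $\mathbf{r}^{3D}_{l,k}$ the 2D and 3D line residuals; each term is weighted by its measurement covariance $\Sigma$.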

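The solver pattern named in the abstract — Gauss-Newton iterations combined with a Chi-Square test to reject wrong feature matches — can be sketched on a toy problem. This is an illustrative assumption, not the paper's implementation: the "pose" here is a 2D translation so the Gauss-Newton step collapses to averaging inlier residuals, whereas the paper estimates a full 6-DoF camera pose from point and line residuals.

```python
# Toy sketch of a Gauss-Newton pose solver with Chi-Square outlier
# rejection. Matches whose squared residual exceeds the 95% Chi-Square
# bound (2 degrees of freedom) are excluded from the update.

CHI2_2DOF_95 = 5.991  # 95% Chi-Square threshold, 2 DoF

def solve_translation(src, dst, sigma=1.0, iters=10):
    """Estimate t such that src[i] + t ~= dst[i], rejecting outliers."""
    tx, ty = 0.0, 0.0
    inliers = []
    for _ in range(iters):
        # Chi-Square test at the current estimate: keep plausible matches.
        inliers = [
            i for i in range(len(src))
            if ((dst[i][0] - src[i][0] - tx) ** 2
                + (dst[i][1] - src[i][1] - ty) ** 2) / sigma ** 2
            <= CHI2_2DOF_95
        ]
        if not inliers:
            break
        # Gauss-Newton step: for a pure translation the Jacobian is the
        # identity, so the normal equations reduce to the mean residual.
        tx += sum(dst[i][0] - src[i][0] - tx for i in inliers) / len(inliers)
        ty += sum(dst[i][1] - src[i][1] - ty for i in inliers) / len(inliers)
    return (tx, ty), inliers

# Three good matches shifted by (2, 1) plus one gross outlier.
src = [(0, 0), (1, 0), (0, 1), (5, 5)]
dst = [(2, 1), (3, 1), (2, 2), (50, 50)]
t, inliers = solve_translation(src, dst)
```

The outlier match is discarded by the gate on the first iteration, so the estimate converges to the true shift from the three consistent matches; the real system applies the same gate-then-update loop to point and line reprojection residuals.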

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/ec779cc65bd7/sensors-20-04984-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/8edfd43ec375/sensors-20-04984-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/8e152677f54a/sensors-20-04984-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/70b7c2e823d9/sensors-20-04984-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/afc14a914516/sensors-20-04984-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/4193eb724604/sensors-20-04984-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/b1c30dbdd512/sensors-20-04984-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/e4e64f3e1a42/sensors-20-04984-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/c3006c7fbc76/sensors-20-04984-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/fb4f14c98e81/sensors-20-04984-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9a68/7506666/c5069b0a6782/sensors-20-04984-g011.jpg

Similar articles

1. Robust RGB-D SLAM Using Point and Line Features for Low Textured Scene.
Sensors (Basel). 2020 Sep 2;20(17):4984. doi: 10.3390/s20174984.
2. Multi-Feature Nonlinear Optimization Motion Estimation Based on RGB-D and Inertial Fusion.
Sensors (Basel). 2020 Aug 19;20(17):4666. doi: 10.3390/s20174666.
3. Point-Plane SLAM Using Supposed Planes for Indoor Environments.
Sensors (Basel). 2019 Sep 2;19(17):3795. doi: 10.3390/s19173795.
4. Dense RGB-D SLAM with Multiple Cameras.
Sensors (Basel). 2018 Jul 2;18(7):2118. doi: 10.3390/s18072118.
5. RGB-D SLAM Using Point-Plane Constraints for Indoor Environments.
Sensors (Basel). 2019 Jun 17;19(12):2721. doi: 10.3390/s19122721.
6. Robust Visual Odometry Leveraging Mixture of Manhattan Frames in Indoor Environments.
Sensors (Basel). 2022 Nov 9;22(22):8644. doi: 10.3390/s22228644.
7. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
8. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.
9. Robust and Efficient CPU-Based RGB-D Scene Reconstruction.
Sensors (Basel). 2018 Oct 28;18(11):3652. doi: 10.3390/s18113652.
10. RGB-D Object SLAM Using Quadrics for Indoor Environments.
Sensors (Basel). 2020 Sep 9;20(18):5150. doi: 10.3390/s20185150.

Cited by

1. Unsupervised Monocular Depth and Camera Pose Estimation with Multiple Masks and Geometric Consistency Constraints.
Sensors (Basel). 2023 Jun 4;23(11):5329. doi: 10.3390/s23115329.
2. Autonomous Exploration of Unknown Indoor Environments for High-Quality Mapping Using Feature-Based RGB-D SLAM.
Sensors (Basel). 2022 Jul 7;22(14):5117. doi: 10.3390/s22145117.
3. Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints.
Sensors (Basel). 2022 Feb 11;22(4):1383. doi: 10.3390/s22041383.

References

1. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.
Sensors (Basel). 2018 May 1;18(5):1385. doi: 10.3390/s18051385.
2. PL-VIO: Tightly-Coupled Monocular Visual-Inertial Odometry Using Point and Line Features.
Sensors (Basel). 2018 Apr 10;18(4):1159. doi: 10.3390/s18041159.
3. Direct Sparse Odometry.
IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):611-625. doi: 10.1109/TPAMI.2017.2658577. Epub 2017 Apr 12.
4. IT-SVO: Improved Semi-Direct Monocular Visual Odometry Combined with JS Divergence in Restricted Mobile Devices.
Sensors (Basel). 2021 Mar 12;21(6):2025. doi: 10.3390/s21062025.
5. A Multi-Feature Fusion SLAM System Attaching Semantic Invariant to Points and Lines.
Sensors (Basel). 2021 Feb 8;21(4):1196. doi: 10.3390/s21041196.
6. Texture Synthesis Repair of RealSense D435i Depth Images with Object-Oriented RGB Image Segmentation.
Sensors (Basel). 2020 Nov 24;20(23):6725. doi: 10.3390/s20236725.