

DOT-SLAM: A Stereo Visual Simultaneous Localization and Mapping (SLAM) System with Dynamic Object Tracking Based on Graph Optimization.

Authors

Zhu Yuan, An Hao, Wang Huaide, Xu Ruidong, Sun Zhipeng, Lu Ke

Affiliations

School of Automotive Studies, Tongji University, Shanghai 201800, China.

Nanchang Automotive Institute of Intelligence & New Energy, Tongji University, Nanchang 330052, China.

Publication

Sensors (Basel). 2024 Jul 18;24(14):4676. doi: 10.3390/s24144676.

DOI: 10.3390/s24144676
PMID: 39066073
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11280576/
Abstract

Most visual simultaneous localization and mapping (SLAM) systems are based on the assumption of a static environment in autonomous vehicles. However, when dynamic objects, particularly vehicles, occupy a large portion of the image, the localization accuracy of the system decreases significantly. To mitigate this challenge, this paper presents DOT-SLAM, a novel stereo visual SLAM system that integrates dynamic object tracking through graph optimization. By integrating dynamic object pose estimation into the SLAM system, the system can effectively utilize both foreground and background points for ego-vehicle localization and obtain a static feature-point map. To rectify the inaccuracies in depth estimated directly from stereo disparity on the foreground points of dynamic objects, caused by their self-similarity, a coarse-to-fine depth estimation method based on camera-road plane geometry is presented. This method uses rough depth to guide fine stereo matching, thereby obtaining the three-dimensional (3D) spatial positions of feature points on dynamic objects. Subsequently, constraints on the dynamic object's pose are established using the road plane and the vehicle's non-holonomic constraints (NHCs), reducing the initial pose uncertainty of dynamic objects and leading to more accurate initialization. Finally, foreground points, background points, the local road plane, the ego-vehicle pose, and dynamic object poses are treated as optimization nodes; by establishing and jointly optimizing a nonlinear model based on graph optimization, accurate six-degrees-of-freedom (DoF) pose estimates are obtained for both the ego vehicle and dynamic objects. Experimental validation on the KITTI-360 dataset demonstrates that DOT-SLAM effectively utilizes features from the background and dynamic objects in the environment, resulting in more accurate vehicle trajectory estimation and a static environment map. Results from a real-world dataset test further confirm its effectiveness.
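The coarse stage of the abstract's coarse-to-fine depth estimation relies on camera-road plane geometry. As an illustration only (not the paper's implementation), a common ground-plane prior for a forward-facing pinhole camera at height h above a flat road gives a coarse depth Z ≈ f_y · h / (v − v_horizon) for a point whose road-contact pixel row v is below the horizon row. A minimal sketch, with the function name and parameters hypothetical:

```python
import numpy as np

def coarse_depth_from_road_plane(v_contact, fy, cy, cam_height, pitch=0.0):
    """Coarse depth (m) of a point from its road-contact pixel row.

    v_contact:  image row (pixels) where the object touches the road
    fy, cy:     vertical focal length and principal-point row (pixels)
    cam_height: camera height above the road plane (m)
    pitch:      camera pitch (rad); tilts the horizon row away from cy
    """
    # For a level camera the horizon projects to row cy; pitch shifts it.
    v_horizon = cy - fy * np.tan(pitch)
    dv = v_contact - v_horizon
    if dv <= 0:
        raise ValueError("contact point must lie below the horizon")
    # Similar triangles on the ground plane: Z = fy * h / (v - v_horizon)
    return fy * cam_height / dv
```

Such a rough depth can then seed a narrow disparity search window for the fine stereo-matching step the abstract describes.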

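The abstract also constrains dynamic object poses with the vehicle's non-holonomic constraints (NHCs). One common way to encode an NHC as a residual in a least-squares/graph-optimization setting (a sketch under assumptions, not the paper's exact formulation) is to penalize the lateral and vertical components of the tracked vehicle's velocity expressed in its own body frame:

```python
import numpy as np

def nhc_residual(R_wb, v_world, weight=1.0):
    """Non-holonomic constraint residual for a tracked wheeled vehicle.

    R_wb:    3x3 rotation, body -> world (columns are body axes in world)
    v_world: estimated object velocity in the world frame
    """
    # Rotate the world-frame velocity into the body frame.
    v_body = R_wb.T @ v_world
    # A wheeled vehicle moves (approximately) along its body x-axis, so
    # the lateral (y) and vertical (z) components should be near zero.
    return weight * v_body[1:]
```

Driving this residual toward zero in the joint optimization shrinks the feasible set of object poses, which is consistent with the abstract's claim that NHCs reduce initial pose uncertainty.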

Figure image links (g001-g017):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/df23eb674a1c/sensors-24-04676-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/b8ec2865bad0/sensors-24-04676-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/77fe8c7e2cf4/sensors-24-04676-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/2c7be3793754/sensors-24-04676-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/6ca54571bc36/sensors-24-04676-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/caa82f586a27/sensors-24-04676-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/e13b419cd312/sensors-24-04676-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/5e2be2f1a327/sensors-24-04676-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/5a85e142f9e3/sensors-24-04676-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/ca694bf6cdf1/sensors-24-04676-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/974b6cfaf1a4/sensors-24-04676-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/a9a3489e0a2c/sensors-24-04676-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/e20c92c46073/sensors-24-04676-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/991076bda258/sensors-24-04676-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/d70e372f9fdd/sensors-24-04676-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/0f644f24a4b3/sensors-24-04676-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51d0/11280576/bcd44eaaa761/sensors-24-04676-g017.jpg

Similar Articles

1
DOT-SLAM: A Stereo Visual Simultaneous Localization and Mapping (SLAM) System with Dynamic Object Tracking Based on Graph Optimization.
Sensors (Basel). 2024 Jul 18;24(14):4676. doi: 10.3390/s24144676.
2
RC-SLAM: Road Constrained Stereo Visual SLAM System Based on Graph Optimization.
Sensors (Basel). 2024 Jan 15;24(2):536. doi: 10.3390/s24020536.
3
DMS-SLAM: A General Visual SLAM System for Dynamic Scenes with Multiple Sensors.
Sensors (Basel). 2019 Aug 27;19(17):3714. doi: 10.3390/s19173714.
4
OTE-SLAM: An Object Tracking Enhanced Visual SLAM System for Dynamic Environments.
Sensors (Basel). 2023 Sep 15;23(18):7921. doi: 10.3390/s23187921.
5
BY-SLAM: Dynamic Visual SLAM System Based on BEBLID and Semantic Information Extraction.
Sensors (Basel). 2024 Jul 19;24(14):4693. doi: 10.3390/s24144693.
6
Semantic visual simultaneous localization and mapping (SLAM) using deep learning for dynamic scenes.
PeerJ Comput Sci. 2023 Oct 10;9:e1628. doi: 10.7717/peerj-cs.1628. eCollection 2023.
7
YPD-SLAM: A Real-Time VSLAM System for Handling Dynamic Indoor Environments.
Sensors (Basel). 2022 Nov 7;22(21):8561. doi: 10.3390/s22218561.
8
DiT-SLAM: Real-Time Dense Visual-Inertial SLAM with Implicit Depth Representation and Tightly-Coupled Graph Optimization.
Sensors (Basel). 2022 Apr 28;22(9):3389. doi: 10.3390/s22093389.
9
Visual SLAM for Dynamic Environments Based on Object Detection and Optical Flow for Dynamic Object Removal.
Sensors (Basel). 2022 Oct 5;22(19):7553. doi: 10.3390/s22197553.
10
Incremental Pose Map Optimization for Monocular Vision SLAM Based on Similarity Transformation.
Sensors (Basel). 2019 Nov 13;19(22):4945. doi: 10.3390/s19224945.

Cited By

1
An Adaptive Threshold-Based Pixel Point Tracking Algorithm Using Reference Features Leveraging the Multi-State Constrained Kalman Filter Feature Point Triangulation Technique for Depth Mapping the Environment.
Sensors (Basel). 2025 Apr 30;25(9):2849. doi: 10.3390/s25092849.
2
Advancements in Sensor Fusion for Underwater SLAM: A Review on Enhanced Navigation and Environmental Perception.
Sensors (Basel). 2024 Nov 24;24(23):7490. doi: 10.3390/s24237490.
3
Hyperspectral Attention Network for Object Tracking.
Sensors (Basel). 2024 Sep 24;24(19):6178. doi: 10.3390/s24196178.

References

1
RC-SLAM: Road Constrained Stereo Visual SLAM System Based on Graph Optimization.
Sensors (Basel). 2024 Jan 15;24(2):536. doi: 10.3390/s24020536.
2
OTE-SLAM: An Object Tracking Enhanced Visual SLAM System for Dynamic Environments.
Sensors (Basel). 2023 Sep 15;23(18):7921. doi: 10.3390/s23187921.
3
Advances in Visual Simultaneous Localisation and Mapping Techniques for Autonomous Vehicles: A Review.
Sensors (Basel). 2022 Nov 18;22(22):8943. doi: 10.3390/s22228943.
4
KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding in 2D and 3D.
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3292-3310. doi: 10.1109/TPAMI.2022.3179507. Epub 2023 Feb 3.
5
RGB-D SLAM in Dynamic Environments Using Point Correlations.
IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):373-389. doi: 10.1109/TPAMI.2020.3010942. Epub 2021 Dec 7.
6
CoSLAM: collaborative visual SLAM in dynamic environments.
IEEE Trans Pattern Anal Mach Intell. 2013 Feb;35(2):354-66. doi: 10.1109/TPAMI.2012.104.
7
Least-squares fitting of two 3-d point sets.
IEEE Trans Pattern Anal Mach Intell. 1987 May;9(5):698-700. doi: 10.1109/tpami.1987.4767965.