Suppr 超能文献



OTE-SLAM: An Object Tracking Enhanced Visual SLAM System for Dynamic Environments.

Authors

Chang Yimeng, Hu Jun, Xu Shiyou

Affiliation

School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen 518107, China.

Publication

Sensors (Basel). 2023 Sep 15;23(18):7921. doi: 10.3390/s23187921.

DOI:10.3390/s23187921
PMID:37765978
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10534583/
Abstract

With the rapid development of autonomous driving and robotics applications in recent years, visual Simultaneous Localization and Mapping (SLAM) has become a hot research topic. The majority of visual SLAM systems rely on the assumption of scene rigidity, which may not always hold true in real applications. In dynamic environments, SLAM systems that do not account for dynamic objects will easily fail to estimate the camera pose. Some existing methods attempt to address this issue by simply excluding the dynamic features lying on moving objects, but this may lead to a shortage of features for tracking. To tackle this problem, we propose OTE-SLAM, an object tracking enhanced visual SLAM system, which not only tracks the camera motion but also tracks the movement of dynamic objects. Furthermore, we perform joint optimization of both the camera pose and object 3D position, enabling a mutual benefit between visual SLAM and object tracking. The experimental results demonstrate that the proposed approach improves the accuracy of the SLAM system in challenging dynamic environments. The improvements include a maximum reduction in absolute trajectory error and relative trajectory error of 22% and 33%, respectively.
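The reported 22%/33% gains refer to the standard trajectory-error metrics. As a minimal sketch of how these metrics are typically computed (pure Python, translation only, assuming the estimated and ground-truth trajectories are already time-associated and aligned; standard evaluation tools additionally perform SE(3)/Sim(3) alignment and use full poses — the function names here are ours, not from the paper):

```python
import math

def ate_rmse(gt, est):
    """Absolute trajectory error: RMSE of per-pose position differences.
    gt/est are equal-length lists of (x, y, z) positions, already
    time-associated and aligned."""
    sq = [sum((g - e) ** 2 for g, e in zip(p, q)) for p, q in zip(gt, est)]
    return math.sqrt(sum(sq) / len(sq))

def rpe_rmse(gt, est, delta=1):
    """Relative trajectory (pose) error over a fixed frame interval `delta`:
    compares relative motions between frame pairs rather than absolute
    positions, so it measures local drift."""
    errs = []
    for i in range(len(gt) - delta):
        g_rel = [b - a for a, b in zip(gt[i], gt[i + delta])]
        e_rel = [b - a for a, b in zip(est[i], est[i + delta])]
        errs.append(sum((g - e) ** 2 for g, e in zip(g_rel, e_rel)))
    return math.sqrt(sum(errs) / len(errs))

# Identical trajectories give zero error; a drifting estimate does not.
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (2.0, 0.1, 0.0)]
print(ate_rmse(gt, gt), ate_rmse(gt, est))
```

A "maximum reduction of 22% in ATE" then means that, on the best-case sequence, the RMSE returned by a function like `ate_rmse` dropped to 78% of the baseline's value.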


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/cf591e30b9cb/sensors-23-07921-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/a469af32705c/sensors-23-07921-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/84fc4aa82e1b/sensors-23-07921-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/bfaa9c1adabc/sensors-23-07921-g004a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/378a32518aec/sensors-23-07921-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/79ef42cf9a69/sensors-23-07921-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/cbb9ceab7971/sensors-23-07921-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/a4490a405dc3/sensors-23-07921-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6806/10534583/0ec33b60bdf6/sensors-23-07921-g009.jpg

Similar Articles

1. OTE-SLAM: An Object Tracking Enhanced Visual SLAM System for Dynamic Environments.
Sensors (Basel). 2023 Sep 15;23(18):7921. doi: 10.3390/s23187921.
2. DOT-SLAM: A Stereo Visual Simultaneous Localization and Mapping (SLAM) System with Dynamic Object Tracking Based on Graph Optimization.
Sensors (Basel). 2024 Jul 18;24(14):4676. doi: 10.3390/s24144676.
3. DOE-SLAM: Dynamic Object Enhanced Visual SLAM.
Sensors (Basel). 2021 Apr 29;21(9):3091. doi: 10.3390/s21093091.
4. Semantic visual simultaneous localization and mapping (SLAM) using deep learning for dynamic scenes.
PeerJ Comput Sci. 2023 Oct 10;9:e1628. doi: 10.7717/peerj-cs.1628. eCollection 2023.
5. Visual SLAM for Dynamic Environments Based on Object Detection and Optical Flow for Dynamic Object Removal.
Sensors (Basel). 2022 Oct 5;22(19):7553. doi: 10.3390/s22197553.
6. BY-SLAM: Dynamic Visual SLAM System Based on BEBLID and Semantic Information Extraction.
Sensors (Basel). 2024 Jul 19;24(14):4693. doi: 10.3390/s24144693.
7. KISS-Keep It Static SLAMMOT-The Cost of Integrating Moving Object Tracking into an EKF-SLAM Algorithm.
Sensors (Basel). 2024 Sep 4;24(17):5764. doi: 10.3390/s24175764.
8. SLAM in Dynamic Environments: A Deep Learning Approach for Moving Object Tracking Using ML-RANSAC Algorithm.
Sensors (Basel). 2019 Aug 26;19(17):3699. doi: 10.3390/s19173699.
9. ADM-SLAM: Accurate and Fast Dynamic Visual SLAM with Adaptive Feature Point Extraction, Deeplabv3pro, and Multi-View Geometry.
Sensors (Basel). 2024 Jun 2;24(11):3578. doi: 10.3390/s24113578.
10. A Semantic SLAM System for Catadioptric Panoramic Cameras in Dynamic Environments.
Sensors (Basel). 2021 Sep 1;21(17):5889. doi: 10.3390/s21175889.

Cited By

1. Monocular Object-Level SLAM Enhanced by Joint Semantic Segmentation and Depth Estimation.
Sensors (Basel). 2025 Mar 27;25(7):2110. doi: 10.3390/s25072110.
2. DOT-SLAM: A Stereo Visual Simultaneous Localization and Mapping (SLAM) System with Dynamic Object Tracking Based on Graph Optimization.
Sensors (Basel). 2024 Jul 18;24(14):4676. doi: 10.3390/s24144676.

References Cited in This Article

1. An Adaptive ORB-SLAM3 System for Outdoor Dynamic Environments.
Sensors (Basel). 2023 Jan 25;23(3):1359. doi: 10.3390/s23031359.
2. Multi-Objective Location and Mapping Based on Deep Learning and Visual Slam.
Sensors (Basel). 2022 Oct 6;22(19):7576. doi: 10.3390/s22197576.
3. DeepSort: deep convolutional networks for sorting haploid maize seeds.
BMC Bioinformatics. 2018 Aug 13;19(Suppl 9):289. doi: 10.1186/s12859-018-2267-2.
4. Direct Sparse Odometry.
IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):611-625. doi: 10.1109/TPAMI.2017.2658577. Epub 2017 Apr 12.
5. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615. Epub 2017 Jan 2.