
Visual Odometry and Place Recognition Fusion for Vehicle Position Tracking in Urban Environments

Authors

Ouerghi Safa, Boutteau Rémi, Savatier Xavier, Tlili Fethi

Affiliations

Carthage University, SUP'COM, GRESCOM, El Ghazela 2083, Tunisia.

Normandie University, UNIROUEN, ESIGELEC, IRSEEM, 76000 Rouen, France.

Publication

Sensors (Basel). 2018 Mar 22;18(4):939. doi: 10.3390/s18040939.

DOI: 10.3390/s18040939
PMID: 29565310
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5948842/
Abstract

In this paper, we address the problem of vehicle localization in urban environments. We rely on visual odometry, calculating the incremental motion, to track the position of the vehicle, and on place recognition to correct the accumulated drift of visual odometry whenever a location is recognized. The place recognition module is SeqSLAM, which handles challenging environments and achieves quite remarkable results. Specifically, we perform long-term navigation of a vehicle based on the fusion of visual odometry and SeqSLAM. The template library for the latter is created online using navigation information from the visual odometry module. That is, when a location is recognized, the corresponding information is used as an observation for the filter. The fusion is done using the EKF and the UKF, the well-known nonlinear state estimation methods, to assess which is the superior alternative. The algorithm is evaluated on the KITTI dataset, and the results show a reduction of the navigation errors through loop-closure detection. The overall position error of visual odometry with SeqSLAM is 0.22% of the trajectory, which is much smaller than the 0.45% navigation error of visual odometry alone. In addition, despite the superiority of the UKF in a variety of estimation problems, our results indicate that the UKF performs only as well as the EKF while incurring additional computational overhead. This leads to the conclusion that the EKF is the better choice for fusing visual odometry and SeqSLAM in a long-term navigation context.
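The fusion scheme the abstract describes can be sketched as a small planar EKF: visual odometry drives the prediction step with body-frame increments, and a SeqSLAM-style place-recognition hit supplies an absolute (x, y) observation that corrects the accumulated drift. This is an illustrative sketch only, not the authors' implementation; the state layout, noise values, and the class name `VoPlaceEKF` are assumptions.

```python
import numpy as np

class VoPlaceEKF:
    """Toy EKF fusing visual odometry with place-recognition position fixes.

    State is [x, y, theta]; VO supplies body-frame increments, place
    recognition supplies an absolute (x, y) observation on loop closure.
    """

    def __init__(self, q=0.01, r=0.5):
        self.x = np.zeros(3)           # state estimate [x, y, theta]
        self.P = np.eye(3) * 1e-6      # state covariance
        self.Q = np.eye(3) * q         # assumed VO process noise
        self.R = np.eye(2) * r         # assumed place-recognition noise

    def predict(self, dx, dy, dtheta):
        """Propagate the state with a VO increment given in the body frame."""
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x[0] += c * dx - s * dy
        self.x[1] += s * dx + c * dy
        self.x[2] += dtheta
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -s * dx - c * dy],
                      [0.0, 1.0,  c * dx - s * dy],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, zx, zy):
        """Correct drift with an absolute (x, y) fix from place recognition."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])              # linear observation model
        y = np.array([zx, zy]) - H @ self.x          # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```

In use, `predict` runs at every VO frame while `update` fires only when a location is recognized, which mirrors the paper's online template-library design: between loop closures the drift grows, and each recognition event pulls the estimate back toward the remembered position.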


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/91f3a64f3e2b/sensors-18-00939-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/29c05e11c302/sensors-18-00939-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/d5ff2378b2ba/sensors-18-00939-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/b3c910d555e2/sensors-18-00939-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/52349f161b13/sensors-18-00939-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/a714a62230ed/sensors-18-00939-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/7f7e0ab8dfb8/sensors-18-00939-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/c9296c1c61f4/sensors-18-00939-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a682/5948842/8ff0a4fc962c/sensors-18-00939-g009.jpg

Similar Articles

1. Visual Odometry and Place Recognition Fusion for Vehicle Position Tracking in Urban Environments. Sensors (Basel). 2018 Mar 22;18(4):939. doi: 10.3390/s18040939.
2. A Novel Online Approach for Drift Covariance Estimation of Odometries Used in Intelligent Vehicle Localization. Sensors (Basel). 2019 Nov 26;19(23):5178. doi: 10.3390/s19235178.
3. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors. Sensors (Basel). 2016 Dec 22;17(1):11. doi: 10.3390/s17010011.
4. Event-based feature tracking in a visual inertial odometry framework. Front Robot AI. 2023 Feb 14;10:994488. doi: 10.3389/frobt.2023.994488. eCollection 2023.
5. On-line Smoothing and Error Modelling for Integration of GNSS and Visual Odometry. Sensors (Basel). 2019 Nov 29;19(23):5259. doi: 10.3390/s19235259.
6. Landmark-Based Scale Estimation and Correction of Visual Inertial Odometry for VTOL UAVs in a GPS-Denied Environment. Sensors (Basel). 2022 Dec 9;22(24):9654. doi: 10.3390/s22249654.
7. Road-Network-Map-Assisted Vehicle Positioning Based on Pose Graph Optimization. Sensors (Basel). 2023 Aug 31;23(17):7581. doi: 10.3390/s23177581.
8. Radar and Visual Odometry Integrated System Aided Navigation for UAVS in GNSS Denied Environment. Sensors (Basel). 2018 Aug 23;18(9):2776. doi: 10.3390/s18092776.
9. Unsupervised Deep Visual-Inertial Odometry with Online Error Correction for RGB-D Imagery. IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2478-2493. doi: 10.1109/TPAMI.2019.2909895. Epub 2019 Apr 15.
10. RTLIO: Real-Time LiDAR-Inertial Odometry and Mapping for UAVs. Sensors (Basel). 2021 Jun 8;21(12):3955. doi: 10.3390/s21123955.

Cited By

1. Road-Network-Map-Assisted Vehicle Positioning Based on Pose Graph Optimization. Sensors (Basel). 2023 Aug 31;23(17):7581. doi: 10.3390/s23177581.
2. Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor. Sensors (Basel). 2020 May 19;20(10):2870. doi: 10.3390/s20102870.
3. ConvNet and LSH-Based Visual Localization Using Localized Sequence Matching.

References

1. PHROG: A Multimodal Feature for Place Recognition. Sensors (Basel). 2017 May 20;17(5):1167. doi: 10.3390/s17051167.
2. Map-Based Probabilistic Visual Self-Localization. IEEE Trans Pattern Anal Mach Intell. 2016 Apr;38(4):652-65. doi: 10.1109/TPAMI.2015.2453975.
3. Sensors (Basel). 2019 May 28;19(11):2439. doi: 10.3390/s19112439.
4. VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation. Sensors (Basel). 2018 Nov 19;18(11):4036. doi: 10.3390/s18114036.