

UnVELO: Unsupervised Vision-Enhanced LiDAR Odometry with Online Correction.

Affiliations

College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China.

Publication

Sensors (Basel). 2023 Apr 13;23(8):3967. doi: 10.3390/s23083967.

DOI: 10.3390/s23083967
PMID: 37112307
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10142647/
Abstract

Due to the complementary characteristics of visual and LiDAR information, these two modalities have been fused to facilitate many vision tasks. However, current studies of learning-based odometries mainly focus on either the visual or LiDAR modality, leaving visual-LiDAR odometries (VLOs) under-explored. This work proposes a new method to implement an unsupervised VLO, which adopts a LiDAR-dominant scheme to fuse the two modalities. We, therefore, refer to it as unsupervised vision-enhanced LiDAR odometry (UnVELO). It converts 3D LiDAR points into a dense vertex map via spherical projection and generates a vertex color map by colorizing each vertex with visual information. Further, a point-to-plane distance-based geometric loss and a photometric-error-based visual loss are, respectively, placed on locally planar regions and cluttered regions. Last, but not least, we designed an online pose-correction module to refine the pose predicted by the trained UnVELO during test time. In contrast to the vision-dominant fusion scheme adopted in most previous VLOs, our LiDAR-dominant method adopts the dense representations for both modalities, which facilitates the visual-LiDAR fusion. Besides, our method uses the accurate LiDAR measurements instead of the predicted noisy dense depth maps, which significantly improves the robustness to illumination variations, as well as the efficiency of the online pose correction. The experiments on the KITTI and DSEC datasets showed that our method outperformed previous two-frame-based learning methods. It was also competitive with hybrid methods that integrate a global optimization on multiple or all frames.
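
To make the representation concrete, here is a minimal sketch of the spherical-projection step described above: converting an unordered LiDAR scan into a dense vertex map. The 64 x 1024 image size and the vertical field of view are typical Velodyne HDL-64 (KITTI) values assumed for illustration, not parameters taken from the paper; all names are illustrative.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W vertex map.

    Each pixel stores the (x, y, z) of the point falling into it, giving
    a dense 2D representation of the scan. Image size and vertical field
    of view are assumed HDL-64 defaults, not values from the paper.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))  # elevation angle

    u = 0.5 * (1.0 - yaw / np.pi)   # column coordinate in [0, 1]
    v = (fov_up - pitch) / fov      # row coordinate in [0, 1]

    cols = np.clip((u * W).astype(np.int32), 0, W - 1)
    rows = np.clip((v * H).astype(np.int32), 0, H - 1)

    vertex_map = np.zeros((H, W, 3), dtype=np.float32)
    # Write far points first so nearer points win on pixel collisions.
    order = np.argsort(-depth)
    vertex_map[rows[order], cols[order]] = points[order]
    return vertex_map
```

The vertex color map mentioned in the abstract would then be built by projecting each stored vertex into the camera image and sampling its color, given the camera intrinsics and the LiDAR-camera extrinsic calibration.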

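The geometric loss is described only as "point-to-plane distance-based". For readers unfamiliar with the term, the sketch below shows a standard point-to-plane residual of the kind such a loss is built on (an assumed formulation: the correspondence search and normal estimation are omitted, and the mean-squared aggregation is illustrative, not the paper's exact loss).

```python
import numpy as np

def point_to_plane_residuals(src_pts, tgt_pts, tgt_normals, T):
    """Signed point-to-plane distances under a candidate pose.

    src_pts, tgt_pts : (N, 3) corresponding 3D points in the two scans
    tgt_normals      : (N, 3) unit surface normals at the target points
    T                : (4, 4) homogeneous source-to-target transform
    """
    R, t = T[:3, :3], T[:3, 3]
    transformed = src_pts @ R.T + t
    # Distance of each transformed point to the tangent plane at its match.
    return np.einsum('ij,ij->i', transformed - tgt_pts, tgt_normals)

def geometric_loss(src_pts, tgt_pts, tgt_normals, T):
    # Illustrative aggregation; the paper restricts this loss to locally
    # planar regions and pairs it with a photometric loss elsewhere.
    r = point_to_plane_residuals(src_pts, tgt_pts, tgt_normals, T)
    return np.mean(r ** 2)
```

An online correction module of the kind the abstract describes would then minimize such a residual at test time to refine the network-predicted pose frame by frame.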

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/c1514e14431a/sensors-23-03967-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/0f3ca9b9b6a4/sensors-23-03967-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/fd57a757cc6a/sensors-23-03967-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/30aebfceaf40/sensors-23-03967-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/d350c9f6d1b0/sensors-23-03967-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/7d97db97ad21/sensors-23-03967-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/0bcdfccd4989/sensors-23-03967-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/4f32e96cb9a2/sensors-23-03967-g008.jpg

Similar Articles

1. UnVELO: Unsupervised Vision-Enhanced LiDAR Odometry with Online Correction. Sensors (Basel). 2023 Apr 13;23(8):3967. doi: 10.3390/s23083967.
2. SDV-LOAM: Semi-Direct Visual-LiDAR Odometry and Mapping. IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):11203-11220. doi: 10.1109/TPAMI.2023.3262817. Epub 2023 Aug 7.
3. SLAM and 3D Semantic Reconstruction Based on the Fusion of Lidar and Monocular Vision. Sensors (Basel). 2023 Jan 29;23(3):1502. doi: 10.3390/s23031502.
4. VA-LOAM: Visual Assist LiDAR Odometry and Mapping for Accurate Autonomous Navigation. Sensors (Basel). 2024 Jun 13;24(12):3831. doi: 10.3390/s24123831.
5. Marked-LIEO: Visual Marker-Aided LiDAR/IMU/Encoder Integrated Odometry. Sensors (Basel). 2022 Jun 23;22(13):4749. doi: 10.3390/s22134749.
6. Efficient 3D Deep LiDAR Odometry. IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5749-5765. doi: 10.1109/TPAMI.2022.3207015. Epub 2023 Apr 3.
7. VILO SLAM: Tightly Coupled Binocular Vision-Inertia SLAM Combined with LiDAR. Sensors (Basel). 2023 May 9;23(10):4588. doi: 10.3390/s23104588.
8. Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning. Sensors (Basel). 2021 Jul 11;21(14):4735. doi: 10.3390/s21144735.
9. Robust Localization of Industrial Park UGV and Prior Map Maintenance. Sensors (Basel). 2023 Aug 6;23(15):6987. doi: 10.3390/s23156987.
10. RAUM-VO: Rotational Adjusted Unsupervised Monocular Visual Odometry. Sensors (Basel). 2022 Mar 30;22(7):2651. doi: 10.3390/s22072651.

References Cited in This Article

1. Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation. Sensors (Basel). 2022 Oct 20;22(20):8021. doi: 10.3390/s22208021.
2. Efficient 3D Deep LiDAR Odometry. IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5749-5765. doi: 10.1109/TPAMI.2022.3207015. Epub 2023 Apr 3.
3. Least-squares fitting of two 3-D point sets. IEEE Trans Pattern Anal Mach Intell. 1987 May;9(5):698-700. doi: 10.1109/tpami.1987.4767965.