
UnVELO: Unsupervised Vision-Enhanced LiDAR Odometry with Online Correction.

Affiliation

College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China.

Publication

Sensors (Basel). 2023 Apr 13;23(8):3967. doi: 10.3390/s23083967.

Abstract

Due to the complementary characteristics of visual and LiDAR information, the two modalities have been fused to facilitate many vision tasks. However, current studies of learning-based odometry mainly focus on either the visual or the LiDAR modality, leaving visual-LiDAR odometries (VLOs) under-explored. This work proposes a new method to implement an unsupervised VLO, which adopts a LiDAR-dominant scheme to fuse the two modalities; we therefore refer to it as unsupervised vision-enhanced LiDAR odometry (UnVELO). It converts 3D LiDAR points into a dense vertex map via spherical projection and generates a vertex color map by colorizing each vertex with visual information. Further, a point-to-plane distance-based geometric loss and a photometric-error-based visual loss are placed on locally planar regions and cluttered regions, respectively. Finally, we designed an online pose-correction module to refine the pose predicted by the trained UnVELO at test time. In contrast to the vision-dominant fusion scheme adopted in most previous VLOs, our LiDAR-dominant method adopts dense representations for both modalities, which facilitates visual-LiDAR fusion. Moreover, our method uses accurate LiDAR measurements instead of predicted, noisy dense depth maps, which significantly improves robustness to illumination variations as well as the efficiency of the online pose correction. Experiments on the KITTI and DSEC datasets showed that our method outperformed previous two-frame-based learning methods and was competitive with hybrid methods that integrate a global optimization over multiple or all frames.
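The two core geometric ingredients mentioned in the abstract — the spherical projection that turns a LiDAR sweep into a dense vertex map, and the point-to-plane distance used in the geometric loss — can be sketched as below. This is a minimal illustration, not the paper's implementation: the image resolution, field-of-view bounds (roughly matching a Velodyne HDL-64E), and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) array of 3D LiDAR points onto a dense h x w
    vertex map, where each pixel stores the (x, y, z) of the point
    falling into it. fov_up/fov_down are in degrees and sensor-specific
    (illustrative values here)."""
    fov_up_r = np.deg2rad(fov_up)
    fov_down_r = np.deg2rad(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w            # column from azimuth
    v = (1.0 - (pitch - fov_down_r) / fov) * h   # row from elevation

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    vertex_map = np.zeros((h, w, 3), dtype=np.float32)
    # Write far-to-near so that closer points overwrite farther ones.
    order = np.argsort(depth)[::-1]
    vertex_map[v[order], u[order]] = points[order, :3]
    return vertex_map

def point_to_plane_residual(p_src, q_tgt, n_tgt):
    """Signed point-to-plane distance between a transformed source
    point p_src and the target surface through q_tgt with unit
    normal n_tgt; squaring and summing such residuals over locally
    planar regions gives a point-to-plane style geometric loss."""
    return float(np.dot(p_src - q_tgt, n_tgt))
```

A point straight ahead of the sensor, e.g. `(10, 0, 0)`, lands near the horizontal center of the map; in the same way, colorizing each valid vertex with the RGB value found by projecting it into the camera image would yield the vertex color map described above.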


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/887f/10142647/c1514e14431a/sensors-23-03967-g001.jpg
