
LiDAR-360 RGB Camera-360 Thermal Camera Targetless Calibration for Dynamic Situations.

Authors

Tran Khanh Bao, Carballo Alexander, Takeda Kazuya

Affiliations

Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan.

Faculty of Engineering and Graduate School of Engineering, Gifu University, 1-1 Yanagido, Gifu City 501-1193, Japan.

Publication

Sensors (Basel). 2024 Nov 10;24(22):7199. doi: 10.3390/s24227199.

Abstract

Integrating multiple types of sensors into autonomous systems, such as cars and robots, has become a widely adopted approach in modern technology. Among these sensors, RGB cameras, thermal cameras, and LiDAR are particularly valued for their ability to provide comprehensive environmental data. However, despite their advantages, current research primarily focuses on a single sensor or a combination of two sensors at a time; the full potential of utilizing all three sensors together is often neglected. One key challenge is the ego-motion compensation of data in dynamic situations, which arises from the rotational nature of the LiDAR sensor, together with the blind spots of standard cameras caused by their limited field of view. To resolve this problem, this paper proposes a novel method for the simultaneous registration of LiDAR, panoramic RGB cameras, and panoramic thermal cameras in dynamic environments without the need for calibration targets. Initially, essential features are extracted from RGB images, thermal data, and LiDAR point clouds through a novel method designed to capture significant raw data characteristics. These extracted features then serve as a foundation for ego-motion compensation, optimizing the initial dataset. The raw features can subsequently be further refined to enhance calibration accuracy, achieving more precise alignment. The results demonstrate the effectiveness of this approach in improving multi-sensor calibration compared to other methods: at high ego speeds of around 9 m/s, LiDAR-camera calibration accuracy improves by about 30% in some situations. The proposed method has the potential to significantly improve the reliability and accuracy of autonomous systems in real-world scenarios, particularly under challenging environmental conditions.
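The abstract describes the ego-motion compensation step only at a high level. As a minimal sketch of the general idea (not the authors' actual implementation), the Python below "deskews" a rotating LiDAR sweep under an assumed constant-velocity motion model: each point is moved from the sensor pose at its capture time back into a common reference frame. The function name deskew_scan, its parameters, and the constant-velocity assumption are all illustrative.

```python
import numpy as np

def deskew_scan(points, timestamps, linear_vel, angular_vel, t_ref):
    """Ego-motion compensation ("deskewing") for a rotating LiDAR scan.

    Assumes constant ego velocity over the sweep. A point captured at time t
    is mapped into the common reference frame at time t_ref.

    points      : (N, 3) XYZ points in the sensor frame at capture time
    timestamps  : (N,) per-point capture times in seconds
    linear_vel  : (3,) estimated ego linear velocity [m/s]
    angular_vel : (3,) estimated ego angular velocity [rad/s]
    t_ref       : reference time the whole scan is aligned to

    Sign conventions depend on how the velocity is estimated; here the
    sensor pose at time t relative to t_ref is (R(angular_vel*dt), linear_vel*dt).
    """
    dt = timestamps - t_ref                       # time offset of each point
    corrected = np.empty_like(points, dtype=float)
    for i, (p, d) in enumerate(zip(points, dt)):
        # Rotation accumulated over dt, via Rodrigues' formula.
        theta = angular_vel * d
        angle = np.linalg.norm(theta)
        if angle < 1e-12:
            R = np.eye(3)
        else:
            k = theta / angle
            K = np.array([[0.0, -k[2], k[1]],
                          [k[2], 0.0, -k[0]],
                          [-k[1], k[0], 0.0]])
            R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
        # Map the point from the frame at time t into the frame at t_ref.
        corrected[i] = R @ p + linear_vel * d
    return corrected

# Example: 4 points captured over a 75 ms portion of a sweep while the
# platform moves forward at 9 m/s (matching the speed cited in the abstract)
# with a mild yaw rate. These numbers are made up for illustration.
pts = np.array([[10.0, 0.0, 0.0],
                [0.0, 10.0, 0.0],
                [-10.0, 0.0, 0.0],
                [0.0, -10.0, 0.0]])
ts = np.array([0.000, 0.025, 0.050, 0.075])
v = np.array([9.0, 0.0, 0.0])
w = np.array([0.0, 0.0, 0.1])
print(deskew_scan(pts, ts, v, w, t_ref=0.0))
```

Without such compensation, points from the end of a sweep are displaced by up to v * T (here roughly 0.9 m for a 100 ms sweep at 9 m/s), which is large enough to corrupt any feature-based LiDAR-camera alignment.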


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9316/11598782/e422391257a8/sensors-24-07199-g022.jpg
