Enhancing LiDAR Mapping with YOLO-Based Potential Dynamic Object Removal in Autonomous Driving.

Author Information

Jeong Seonghark, Shin Heeseok, Kim Myeong-Jun, Kang Dongwan, Lee Seangwock, Oh Sangki

Affiliations

Propulsion Division, GM Korea Company, Incheon 21344, Republic of Korea.

Convergence Major for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea.

Publication Information

Sensors (Basel). 2024 Nov 27;24(23):7578. doi: 10.3390/s24237578.

Abstract

In this study, we propose an enhanced LiDAR-based mapping and localization system that utilizes a camera-based YOLO (You Only Look Once) algorithm to detect and remove dynamic objects, such as vehicles, from the mapping process. GPS, while commonly used for localization, often fails in urban environments due to signal blockages. To address this limitation, our system integrates YOLOv4 with LiDAR, enabling the removal of dynamic objects to improve map accuracy and localization in high-traffic areas. Existing methods using LiDAR segmentation for map matching often suffer from missed detections and false positives, degrading performance. Our approach leverages YOLOv4's robust object detection capabilities to eliminate potentially dynamic objects while retaining static environmental features, such as buildings, to enhance map accuracy and reliability. The proposed system was validated using a mid-size SUV equipped with LiDAR and camera sensors. The experimental results demonstrate significant improvements in map-matching and localization performance, particularly in urban environments. The system achieved RMSE (Root Mean Square Error) reductions compared to conventional methods, with RMSE values decreasing from 0.9870 to 0.9724 in open areas and from 1.3874 to 1.1217 in urban areas. These findings highlight the ability of the Vision + LiDAR + NDT method to enhance localization performance in both simple and complex environments. By addressing the challenges of dynamic obstacles, the proposed system effectively improves the accuracy and robustness of autonomous navigation in high-traffic settings without relying on GPS.
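
The abstract describes the pipeline only at a high level. As an illustration of its central step, discarding LiDAR points that belong to YOLO-detected potentially dynamic objects before a scan is used for mapping, the sketch below assumes a calibrated camera-LiDAR pair (a 4x4 extrinsic transform and a pinhole intrinsic matrix) and a list of 2D detection boxes for dynamic classes. The function name, arguments, and the simple box test are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def remove_dynamic_points(points_lidar, boxes_px, T_cam_lidar, K):
    """Drop LiDAR points that project into image boxes of dynamic classes.

    points_lidar : (N, 3) array of points in the LiDAR frame.
    boxes_px     : iterable of (x_min, y_min, x_max, y_max) pixel boxes from
                   YOLO detections of potentially dynamic classes (e.g. cars).
    T_cam_lidar  : (4, 4) extrinsic transform from the LiDAR to the camera frame.
    K            : (3, 3) pinhole camera intrinsic matrix.
    Returns the remaining (presumed static) points for map building.
    """
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Only points in front of the camera can be matched against image boxes.
    in_front = pts_cam[:, 2] > 0.0

    # Project onto the image plane; clip depth to avoid division by zero
    # (points behind the camera are discarded by the in_front mask anyway).
    z = np.clip(pts_cam[:, 2:3], 1e-6, None)
    uv = (K @ pts_cam.T).T[:, :2] / z

    # Mark a point as dynamic if it is visible and falls inside any box.
    dynamic = np.zeros(points_lidar.shape[0], dtype=bool)
    for x0, y0, x1, y1 in boxes_px:
        inside = (
            (uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
            (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
        )
        dynamic |= inside & in_front

    return points_lidar[~dynamic]
```

In a mapping loop, each incoming scan filtered this way would then be passed to the NDT-based registration and map accumulation that the paper evaluates.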

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/6eda255ef8f2/sensors-24-07578-g001.jpg
