

Enhancing LiDAR Mapping with YOLO-Based Potential Dynamic Object Removal in Autonomous Driving

Authors

Jeong Seonghark, Shin Heeseok, Kim Myeong-Jun, Kang Dongwan, Lee Seangwock, Oh Sangki

Affiliations

Propulsion Division, GM Korea Company, Incheon 21344, Republic of Korea.

Convergence Major for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea.

Publication

Sensors (Basel). 2024 Nov 27;24(23):7578. doi: 10.3390/s24237578.

DOI: 10.3390/s24237578
PMID: 39686115
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11644668/
Abstract

In this study, we propose an enhanced LiDAR-based mapping and localization system that utilizes a camera-based YOLO (You Only Look Once) algorithm to detect and remove dynamic objects, such as vehicles, from the mapping process. GPS, while commonly used for localization, often fails in urban environments due to signal blockages. To address this limitation, our system integrates YOLOv4 with LiDAR, enabling the removal of dynamic objects to improve map accuracy and localization in high-traffic areas. Existing methods using LiDAR segmentation for map matching often suffer from missed detections and false positives, degrading performance. Our approach leverages YOLOv4's robust object detection capabilities to eliminate potentially dynamic objects while retaining static environmental features, such as buildings, to enhance map accuracy and reliability. The proposed system was validated using a mid-size SUV equipped with LiDAR and camera sensors. The experimental results demonstrate significant improvements in map-matching and localization performance, particularly in urban environments. The system achieved RMSE (Root Mean Square Error) reductions compared to conventional methods, with RMSE values decreasing from 0.9870 to 0.9724 in open areas and from 1.3874 to 1.1217 in urban areas. These findings highlight the ability of the Vision + LiDAR + NDT method to enhance localization performance in both simple and complex environments. By addressing the challenges of dynamic obstacles, the proposed system effectively improves the accuracy and robustness of autonomous navigation in high-traffic settings without relying on GPS.
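The core step the abstract describes, removing LiDAR points that fall inside camera-space YOLO detection boxes before the map is built, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the intrinsic matrix `K`, and the LiDAR-to-camera extrinsic `T_cam_lidar` are assumed inputs, and the YOLO boxes are taken as already-computed pixel rectangles.

```python
import numpy as np

def remove_dynamic_points(points_xyz, boxes, K, T_cam_lidar):
    """Drop LiDAR points whose image projection lands inside any 2D detection box.

    points_xyz  : (N, 3) LiDAR points in the sensor frame.
    boxes       : iterable of (x1, y1, x2, y2) detection boxes in pixels.
    K           : (3, 3) camera intrinsic matrix.
    T_cam_lidar : (4, 4) rigid transform from LiDAR frame to camera frame.
    Returns the points presumed static.
    """
    # Transform LiDAR points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0  # only points ahead of the camera can project

    # Pinhole projection to pixel coordinates.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-9)

    # Mark any point falling inside a detection box as potentially dynamic.
    dynamic = np.zeros(len(points_xyz), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        dynamic |= inside & in_front
    return points_xyz[~dynamic]
```

The retained points would then feed the NDT map-matching stage; points behind detected vehicles are also discarded by this frustum-style culling, which is the usual trade-off of 2D-box masking.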


Figures (g001–g008):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/6eda255ef8f2/sensors-24-07578-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/63b28c6763a6/sensors-24-07578-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/485b88d9ad32/sensors-24-07578-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/ee57173da06e/sensors-24-07578-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/6a4a4287f875/sensors-24-07578-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/e39378ea0c3d/sensors-24-07578-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/9d5d90521d90/sensors-24-07578-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b6e/11644668/44db413b0537/sensors-24-07578-g008.jpg

Similar Articles

1. Enhancing LiDAR Mapping with YOLO-Based Potential Dynamic Object Removal in Autonomous Driving.
Sensors (Basel). 2024 Nov 27;24(23):7578. doi: 10.3390/s24237578.
2. Performance Analysis of NDT-based Graph SLAM for Autonomous Vehicle in Diverse Typical Driving Scenarios of Hong Kong.
Sensors (Basel). 2018 Nov 14;18(11):3928. doi: 10.3390/s18113928.
3. Development of a GPU-Accelerated NDT Localization Algorithm for GNSS-Denied Urban Areas.
Sensors (Basel). 2022 Mar 1;22(5):1913. doi: 10.3390/s22051913.
4. LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Vehicles in Complex Scenarios.
J Imaging. 2023 Feb 20;9(2):52. doi: 10.3390/jimaging9020052.
5. Pole-Like Object Extraction and Pole-Aided GNSS/IMU/LiDAR-SLAM System in Urban Area.
Sensors (Basel). 2020 Dec 13;20(24):7145. doi: 10.3390/s20247145.
6. Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal.
Sensors (Basel). 2024 Dec 17;24(24):8040. doi: 10.3390/s24248040.
7. Towards a Meaningful 3D Map Using a 3D Lidar and a Camera.
Sensors (Basel). 2018 Aug 6;18(8):2571. doi: 10.3390/s18082571.
8. LiDAR Inertial Odometry Based on Indexed Point and Delayed Removal Strategy in Highly Dynamic Environments.
Sensors (Basel). 2023 May 30;23(11):5188. doi: 10.3390/s23115188.
9. Semantic visual simultaneous localization and mapping (SLAM) using deep learning for dynamic scenes.
PeerJ Comput Sci. 2023 Oct 10;9:e1628. doi: 10.7717/peerj-cs.1628. eCollection 2023.
10. Building the Future of Transportation: A Comprehensive Survey on AV Perception, Localization, and Mapping.
Sensors (Basel). 2025 Mar 23;25(7):2004. doi: 10.3390/s25072004.

References Cited in This Article

1. SLAMICP Library: Accelerating Obstacle Detection in Mobile Robot Navigation via Outlier Monitoring following ICP Localization.
Sensors (Basel). 2023 Aug 1;23(15):6841. doi: 10.3390/s23156841.
2. Deep Learning for 3D Point Clouds: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2021 Dec;43(12):4338-4364. doi: 10.1109/TPAMI.2020.3005434. Epub 2021 Nov 3.
3. A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping.
Sensors (Basel). 2020 Apr 7;20(7):2068. doi: 10.3390/s20072068.
4. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.