
Dynamic Occupancy Grid Map with Semantic Information Using Deep Learning-Based BEVFusion Method with Camera and LiDAR Fusion

Authors

Jang Harin, Kim Taehyun, Ahn Kyungjae, Jeon Soo, Kang Yeonsik

Affiliations

Graduate School of Automotive Engineering, Kookmin University, 77 Jeongneung-ro, Seongbuk-gu, Seoul 02707, Republic of Korea.

Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada.

Publication

Sensors (Basel). 2024 Apr 29;24(9):2828. doi: 10.3390/s24092828.

PMID: 38732934
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11086224/
Abstract

In the field of robotics and autonomous driving, dynamic occupancy grid maps (DOGMs) are typically used to represent the position and velocity information of objects. Although three-dimensional light detection and ranging (LiDAR) sensor-based DOGMs have been actively researched, they have limitations, as they cannot classify types of objects. Therefore, in this study, a deep learning-based camera-LiDAR sensor fusion technique is employed as input to DOGMs. Consequently, not only the position and velocity information of objects but also their class information can be updated, expanding the application areas of DOGMs. Moreover, unclassified LiDAR point measurements contribute to the formation of a map of the surrounding environment, improving the reliability of perception by registering objects that were not classified by deep learning. To achieve this, we developed update rules on the basis of the Dempster-Shafer evidence theory, incorporating class information and the uncertainty of objects occupying grid cells. Furthermore, we analyzed the accuracy of the velocity estimation using two update models. One assigns the occupancy probability only to the edges of the oriented bounding box, whereas the other assigns the occupancy probability to the entire area of the box. The performance of the developed perception technique is evaluated using the public nuScenes dataset. The developed DOGM with object class information will help autonomous vehicles to navigate in complex urban driving environments by providing them with rich information, such as the class and velocity of nearby obstacles.
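The abstract states that the grid-cell update rules are built on the Dempster-Shafer evidence theory. As an illustrative sketch only (not the paper's actual implementation), Dempster's rule of combination fuses two independent mass functions over a frame of discernment, renormalizing by the conflicting mass; hypothesis names such as `'O'` (occupied) and `'F'` (free) below are assumptions for the example:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule of combination.

    Each mass function is a dict mapping a frozenset of hypotheses
    (e.g. frozenset({'O'}) for "occupied") to a belief mass in [0, 1].
    Masses whose hypothesis sets have empty intersection are conflicting
    and are redistributed by normalization.
    """
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            intersection = h1 & h2
            if intersection:
                combined[intersection] = combined.get(intersection, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    norm = 1.0 - conflict  # Dempster normalization factor
    return {h: v / norm for h, v in combined.items()}


# Example: fuse a prior cell belief with a new sensor measurement over
# the frame {O, F}; frozenset({'O', 'F'}) carries the unassigned uncertainty.
prior = {frozenset({'O'}): 0.6, frozenset({'O', 'F'}): 0.4}
measurement = {frozenset({'O'}): 0.5, frozenset({'F'}): 0.2, frozenset({'O', 'F'}): 0.3}
posterior = dempster_combine(prior, measurement)
```

In the paper's setting the frame would additionally carry object-class hypotheses from the BEVFusion detections; the combination mechanics stay the same.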

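The abstract compares two update models: one assigns occupancy probability only to the edges of a detection's oriented bounding box, the other to the entire box area. A minimal sketch of that distinction, rasterizing an oriented box into grid cells, is shown below; the function name, parameters, and the one-cell-wide edge band are illustrative assumptions, not details taken from the paper:

```python
import math


def obb_cells(cx, cy, length, width, yaw, resolution, edge_only=False):
    """Return the set of (i, j) grid cells covered by an oriented bounding box.

    With edge_only=False every cell inside the box is returned (the
    "entire area" update model); with edge_only=True only cells within
    one cell of the box boundary are kept (the "edges only" model).
    """
    half_l, half_w = length / 2.0, width / 2.0
    # Search window: cells inside the box's bounding circle.
    r = math.hypot(half_l, half_w)
    i_min, i_max = int(math.floor((cx - r) / resolution)), int(math.ceil((cx + r) / resolution))
    j_min, j_max = int(math.floor((cy - r) / resolution)), int(math.ceil((cy + r) / resolution))
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    cells = set()
    for i in range(i_min, i_max + 1):
        for j in range(j_min, j_max + 1):
            # Cell centre expressed in the box's own frame.
            px = (i + 0.5) * resolution - cx
            py = (j + 0.5) * resolution - cy
            bx = cos_y * px + sin_y * py
            by = -sin_y * px + cos_y * py
            if abs(bx) <= half_l and abs(by) <= half_w:
                on_edge = (half_l - abs(bx) < resolution) or (half_w - abs(by) < resolution)
                if not edge_only or on_edge:
                    cells.add((i, j))
    return cells
```

The edge model touches far fewer cells per detection, which is why the paper analyzes how the choice affects velocity-estimation accuracy.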

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/1242b49a6e38/sensors-24-02828-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/2dd4199dad05/sensors-24-02828-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/d89c21b0eb1b/sensors-24-02828-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/81c4f0b3fdce/sensors-24-02828-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/f6aa8ce97ce2/sensors-24-02828-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/b0b7e40841ab/sensors-24-02828-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/e4668dcc159c/sensors-24-02828-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/e08ad239cb51/sensors-24-02828-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/f05bfb5e9cf7/sensors-24-02828-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/85a9d409c2fc/sensors-24-02828-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/d48f9f70e183/sensors-24-02828-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/4abf072ab16b/sensors-24-02828-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/3ffcbd4ceb48/sensors-24-02828-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/a7a59859f532/sensors-24-02828-g013a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/effafddfc211/sensors-24-02828-g014a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/9d396290be2c/sensors-24-02828-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/9291898b6eec/sensors-24-02828-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/1242b49a6e38/sensors-24-02828-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/2dd4199dad05/sensors-24-02828-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/d89c21b0eb1b/sensors-24-02828-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/81c4f0b3fdce/sensors-24-02828-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/f6aa8ce97ce2/sensors-24-02828-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/b0b7e40841ab/sensors-24-02828-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/e4668dcc159c/sensors-24-02828-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/e08ad239cb51/sensors-24-02828-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/f05bfb5e9cf7/sensors-24-02828-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/85a9d409c2fc/sensors-24-02828-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/d48f9f70e183/sensors-24-02828-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/4abf072ab16b/sensors-24-02828-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/3ffcbd4ceb48/sensors-24-02828-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/a7a59859f532/sensors-24-02828-g013a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/effafddfc211/sensors-24-02828-g014a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/9d396290be2c/sensors-24-02828-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/9291898b6eec/sensors-24-02828-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c412/11086224/1242b49a6e38/sensors-24-02828-g017.jpg

Similar Articles

1. Dynamic Occupancy Grid Map with Semantic Information Using Deep Learning-Based BEVFusion Method with Camera and LiDAR Fusion. Sensors (Basel). 2024 Apr 29;24(9):2828. doi: 10.3390/s24092828.
2. Vehicle Detection on Occupancy Grid Maps: Comparison of Five Detectors Regarding Real-Time Performance. Sensors (Basel). 2023 Feb 2;23(3):1613. doi: 10.3390/s23031613.
3. Cloud Update of Tiled Evidential Occupancy Grid Maps for the Multi-Vehicle Mapping. Sensors (Basel). 2018 Nov 23;18(12):4119. doi: 10.3390/s18124119.
4. Managing Localization Uncertainty to Handle Semantic Lane Information from Geo-Referenced Maps in Evidential Occupancy Grids. Sensors (Basel). 2020 Jan 8;20(2):352. doi: 10.3390/s20020352.
5. Occupancy grid mapping in urban environments from a moving on-board stereo-vision system. Sensors (Basel). 2014 Jun 13;14(6):10454-78. doi: 10.3390/s140610454.
6. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving. Sensors (Basel). 2022 Dec 7;22(24):9577. doi: 10.3390/s22249577.
7. Semantic Point Cloud Mapping of LiDAR Based on Probabilistic Uncertainty Modeling for Autonomous Driving. Sensors (Basel). 2020 Oct 19;20(20):5900. doi: 10.3390/s20205900.
8. Semantic Evidential Grid Mapping Using Monocular and Stereo Cameras. Sensors (Basel). 2021 May 12;21(10):3380. doi: 10.3390/s21103380.
9. A New 3D Object Pose Detection Method Using LIDAR Shape Set. Sensors (Basel). 2018 Mar 16;18(3):882. doi: 10.3390/s18030882.
10. Evaluation of 3D Vulnerable Objects' Detection Using a Multi-Sensors System for Autonomous Vehicles. Sensors (Basel). 2022 Feb 21;22(4):1663. doi: 10.3390/s22041663.
