Hariya Keigo, Inoshita Hiroki, Yanase Ryo, Yoneda Keisuke, Suganuma Naoki
Graduate School of Natural Science and Technology, Kanazawa University, Kanazawa 920-1192, Japan.
Advanced Mobility Research Institute, Kanazawa University, Kanazawa 920-1192, Japan.
Sensors (Basel). 2023 Oct 10;23(20):8367. doi: 10.3390/s23208367.
Recognition of surrounding objects is crucial for the safety of automated driving systems. In deep-learning-based 3D object recognition, several methods fuse Light Detection and Ranging (LiDAR) and camera data. LiDAR-camera fusion is widely acknowledged to be effective because it provides a richer source of information for object detection than methods that rely on a single sensor. Within multistage LiDAR-camera fusion, however, it is difficult to maintain stable object recognition under adverse conditions in which object detection in camera images becomes unreliable, such as at night or in rainy weather. In this paper, we introduce "ExistenceMap-PointPillars", a novel and effective approach to 3D object detection that leverages information from multiple sensors through a straightforward modification of a LiDAR-based 3D object detection network. The core idea of ExistenceMap-PointPillars is to integrate pseudo 2D maps, which probabilistically depict the object existence regions estimated from the fused sensor data, into the pseudo image generated from the 3D point cloud. Experiments on our proprietary dataset demonstrate that ExistenceMap-PointPillars improves mean Average Precision (mAP) by +4.19% over the conventional PointPillars method. In addition, an evaluation of the network's response using Grad-CAM showed that ExistenceMap-PointPillars focuses more strongly on the object existence regions in the pseudo 2D map, which reduces the number of false positives.
In summary, ExistenceMap-PointPillars is a valuable advancement in 3D object detection, offering improved performance and robustness, especially in challenging environmental conditions.
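The fusion step described above, i.e. rasterizing detections from the fused sensor data into a probabilistic 2D existence map and appending it to the point-cloud pseudo image, could be sketched roughly as follows. This is an illustrative sketch only: the function names, the Gaussian rasterization, and the channel-concatenation fusion are assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np

def gaussian_existence_map(h, w, detections, sigma=2.0):
    """Rasterize detections onto an (h, w) bird's-eye-view grid as 2D Gaussians.

    `detections` is a list of (row, col, confidence) tuples in grid coordinates;
    confidences are in [0, 1], so the map reads as an existence probability.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.zeros((h, w), dtype=np.float32)
    for r, c, conf in detections:
        g = conf * np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        m = np.maximum(m, g)  # per-cell max keeps values in [0, 1] under overlap
    return m

def fuse_existence_map(pseudo_image, existence_map):
    """Append the existence map as one extra channel of the (C, H, W) pseudo image."""
    return np.concatenate([pseudo_image, existence_map[None]], axis=0)
```

In this sketch the downstream detection backbone would simply consume a (C+1, H, W) tensor instead of (C, H, W), which is what makes the modification to a LiDAR-only network straightforward.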