Kim Taek-Lim, Park Tae-Hyoung
Department of Control and Robot Engineering, Chungbuk National University, Cheongju 28644, Korea.
Department of Intelligent Systems & Robotics, Chungbuk National University, Cheongju 28644, Korea.
Sensors (Basel). 2022 Sep 21;22(19):7163. doi: 10.3390/s22197163.
Object detection is an important factor in the autonomous driving industry. Object detection for autonomous vehicles requires robust results because various situations and environments must be considered. Sensor fusion is used to achieve robust object detection. A network-based sensor fusion method must fuse the two sets of features effectively; otherwise, performance can degrade substantially. To use sensors effectively in autonomous vehicles, the sensor data must be analyzed; we therefore investigated how camera and LiDAR data change across conditions, with a view to effective fusion. We propose a feature switch layer for a camera-LiDAR sensor fusion network for object detection. Object detection performance was improved by designing a feature switch layer that takes the driving environment into account during feature fusion in the network. The feature switch layer extracts and fuses features while favoring environments in which the sensor data vary less than they did during network training. We conducted an evaluation experiment on the Dense Dataset and confirmed that the proposed method improves object detection performance.
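To make the idea of an environment-aware fusion layer concrete, the following is a minimal sketch of one way such a "feature switch layer" could be realized, assuming a PyTorch implementation in which camera and LiDAR branches produce same-sized feature maps and a learned gate re-weights each modality before fusion. The class, parameter, and tensor names here are hypothetical illustrations, not the authors' actual architecture.

```python
# Hypothetical sketch of a feature switch layer for camera-LiDAR feature fusion.
# All module names and shapes are assumptions; the paper's design may differ.
import torch
import torch.nn as nn


class FeatureSwitchLayer(nn.Module):
    """Gates camera and LiDAR feature maps with learned, scene-dependent weights
    before fusing them, so a less reliable sensor contributes less."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict per-channel gates for both modalities from the concatenated features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # global context of the scene
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),                               # gate values in [0, 1]
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # cam_feat, lidar_feat: (B, C, H, W) feature maps from the two branches.
        stacked = torch.cat([cam_feat, lidar_feat], dim=1)
        g_cam, g_lidar = self.gate(stacked).chunk(2, dim=1)
        gated = torch.cat([cam_feat * g_cam, lidar_feat * g_lidar], dim=1)
        return self.fuse(gated)


if __name__ == "__main__":
    layer = FeatureSwitchLayer(channels=64)
    cam = torch.randn(1, 64, 80, 80)
    lidar = torch.randn(1, 64, 80, 80)
    print(layer(cam, lidar).shape)  # torch.Size([1, 64, 80, 80])
```

In this sketch the gate is driven purely by the fused features themselves; a version closer to the paper's intent might instead condition the gate on an explicit environment cue (e.g., weather or illumination) so that the modality whose data distribution has drifted least from training receives the larger weight.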