Park Jungme, Thota Bharath Kumar, Somashekar Karthik
College of Engineering, Kettering University, Flint, MI 48504, USA.
Sensors (Basel). 2024 Jul 22;24(14):4755. doi: 10.3390/s24144755.
Safe nighttime environmental perception relies on detecting vulnerable road users early, with minimal delay and high precision. This paper presents a sensor-fused nighttime environmental perception system that integrates data from thermal and RGB cameras. A new alignment algorithm is proposed to fuse the data from the two camera sensors; this alignment procedure is crucial for effective sensor fusion. To develop a robust Deep Neural Network (DNN) system, nighttime thermal and RGB images were collected under various scenarios, creating a labeled dataset of 32,000 image pairs. Three fusion techniques were explored using transfer learning, alongside two single-sensor models using only RGB or only thermal data. Five DNN models were developed and evaluated, with experimental results showing superior performance of the fused models over their non-fusion counterparts. The late-fusion system was selected for its optimal balance of accuracy and response time. For real-time inference, the best model was further optimized, achieving 33 fps on an embedded edge-computing device, an 83.33% improvement in inference speed over the unoptimized system. These findings are valuable for advancing Advanced Driver Assistance Systems (ADASs) and autonomous vehicle technologies, enhancing nighttime pedestrian detection to improve road safety and reduce accidents.
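To make the late-fusion idea concrete: each sensor's detector runs independently, and their box-level outputs are merged afterward. The sketch below is a minimal, generic illustration (not the paper's actual method); the box format, IoU threshold, and merging rule (averaged coordinates, max confidence) are all illustrative assumptions, and it presumes the boxes are already mapped into a common image frame by an alignment step.

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2) in a shared, aligned image frame.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def late_fuse(rgb_dets, thermal_dets, iou_thr=0.5):
    # Each detection is (box, score). Overlapping RGB/thermal pairs
    # are merged; unmatched detections from either sensor are kept,
    # so a pedestrian visible only in thermal still survives fusion.
    fused, used = [], set()
    for box_r, score_r in rgb_dets:
        match = next((j for j, (box_t, _) in enumerate(thermal_dets)
                      if j not in used and iou(box_r, box_t) >= iou_thr), None)
        if match is not None:
            box_t, score_t = thermal_dets[match]
            used.add(match)
            merged = tuple((r + t) / 2 for r, t in zip(box_r, box_t))
            fused.append((merged, max(score_r, score_t)))
        else:
            fused.append((box_r, score_r))
    fused += [d for j, d in enumerate(thermal_dets) if j not in used]
    return fused
```

Because fusion happens only on lightweight box lists rather than on feature maps, late fusion adds little latency on top of the two detectors, which is consistent with the accuracy/response-time balance the abstract describes.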