Huang Xiaochen, Wang Xiaofeng, Teng Qizhi, He Xiaohai, Chen Honggang
College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China.
Key Laboratory of Computer Vision and System, Ministry of Education, Tianjin University of Technology, Tianjin 300384, China.
Sensors (Basel). 2024 Sep 30;24(19):6330. doi: 10.3390/s24196330.
Despite significant advancements in CNN-based object detection, adverse weather conditions can disrupt imaging sensors' ability to capture clear images, thereby degrading detection accuracy. Mainstream algorithms for object detection in adverse weather improve detection performance through image restoration. However, most of these approaches are designed for a single degradation scenario, making it difficult to adapt to diverse weather conditions. To address this issue, we propose a degradation type-aware restoration-assisted object detection network, dubbed DTRDNet. It contains an object detection network with a shared feature encoder (SFE) and an object detection decoder, a degradation discrimination image restoration decoder (DDIR), and a degradation category predictor (DCP). In the training phase, we jointly optimize the whole framework on a mixed-weather dataset containing both degraded and clean images. Specifically, degradation-type information is incorporated into the DDIR to prevent clean images from interfering with the restoration module. Furthermore, the DCP endows the SFE with degradation-category awareness, enhancing the detector's adaptability to diverse weather conditions and enabling it to supply the requisite environmental information when needed. Both the DCP and the DDIR can be removed at the inference stage as required, preserving the real-time performance of the detection algorithm. Extensive experiments on clear, hazy, rainy, and snowy images demonstrate that DTRDNet outperforms advanced object detection algorithms, achieving an average mAP of 79.38% across the four weather test sets.
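To make the described architecture concrete, the following is a minimal PyTorch sketch of the training-time structure implied by the abstract: a shared feature encoder (SFE) feeding an object detection decoder, a degradation discrimination image restoration decoder (DDIR), and a degradation category predictor (DCP), with the two auxiliary branches skipped at inference. All module internals, tensor shapes, the per-type conditioning mechanism, and the four-way weather taxonomy (clear/haze/rain/snow) are assumptions for illustration; the paper does not publish this code.

```python
# Hypothetical sketch of DTRDNet's multi-branch layout (not the authors' implementation).
from typing import Optional

import torch
import torch.nn as nn


class DTRDNetSketch(nn.Module):
    NUM_DEGRADATION_TYPES = 4  # assumed label set: clear, haze, rain, snow

    def __init__(self, num_classes: int = 80):
        super().__init__()
        # Shared feature encoder (SFE): a small convolutional stand-in for the real backbone.
        self.sfe = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Object detection decoder: placeholder head producing per-location class logits.
        self.det_decoder = nn.Conv2d(64, num_classes, 1)
        # Degradation category predictor (DCP): global pooling + linear classifier.
        self.dcp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, self.NUM_DEGRADATION_TYPES),
        )
        # Restoration decoder (DDIR), conditioned on the degradation type through a
        # learned per-type embedding (one plausible way to inject type information).
        self.type_embed = nn.Embedding(self.NUM_DEGRADATION_TYPES, 64)
        self.ddir = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor, degradation_type: Optional[torch.Tensor] = None):
        feats = self.sfe(x)
        det_out = self.det_decoder(feats)
        if not self.training or degradation_type is None:
            # Inference: the DCP and DDIR branches are dropped to keep real-time speed.
            return det_out
        dcp_logits = self.dcp(feats)
        # Training: condition restoration on the known degradation type; a "clear" label
        # could be used to bypass the restoration loss for clean inputs.
        cond = self.type_embed(degradation_type)[:, :, None, None]
        restored = self.ddir(feats + cond)
        return det_out, restored, dcp_logits


if __name__ == "__main__":
    model = DTRDNetSketch(num_classes=20)
    imgs = torch.randn(2, 3, 64, 64)
    types = torch.tensor([1, 3])  # e.g. haze, snow (assumed label order)
    det, restored, dcp = model(imgs, types)      # joint training outputs
    model.eval()
    det_only = model(imgs)                       # auxiliary branches removed at inference
    print(det.shape, restored.shape, dcp.shape, det_only.shape)
```

The key design point the sketch mirrors is that only the SFE and detection decoder are exercised at test time, so the restoration and degradation-prediction heads add training-time supervision without inference-time cost.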