Luan Tian, Zhou Shixiong, Zhang Guokang, Song Zechun, Wu Jiahui, Pan Weijun
College of Air Traffic Management, Civil Aviation Flight University of China, Guanghan 618307, China.
Sensors (Basel). 2024 Apr 24;24(9):2710. doi: 10.3390/s24092710.
Target detection technology based on unmanned aerial vehicle (UAV)-derived aerial imagery has been widely applied in forest fire patrol and rescue. However, owing to the particularities of UAV platforms, significant issues remain unresolved, such as severe missed detections, low detection accuracy, and poor early-warning effectiveness. To address these issues, this paper proposes an improved YOLOX network for the rapid detection of forest fires in images captured by UAVs. First, to enhance the network's feature-extraction capability in complex fire environments, a multi-level feature-extraction structure, CSP-ML, is designed to improve the algorithm's detection accuracy for small-target fire areas. In addition, a CBAM attention mechanism is embedded in the neck network to reduce interference from background noise and irrelevant information. Second, an adaptive feature-extraction module is introduced into the feature-fusion stage of the YOLOX network to prevent the loss of important feature information during fusion, thereby enhancing the network's feature-learning capability. Finally, the CIoU loss function replaces the original loss function to address issues such as over-optimization of negative samples and poor gradient-descent direction, thereby strengthening the network's effective recognition of positive samples. Experimental results show that the improved YOLOX network delivers better detection performance, with mAP@50 and mAP@50_95 increasing by 6.4% and 2.17%, respectively, over the traditional YOLOX network. In multi-target and small-target flame scenarios, the improved model achieved an mAP of 96.3%, outperforming deep learning algorithms such as Faster R-CNN, SSD, and YOLOv5 by 33.5%, 7.7%, and 7%, respectively. It exhibits a lower missed-detection rate and higher detection accuracy, and it is capable of handling small-target detection tasks in complex fire environments.
This can provide support for UAV patrol and rescue applications from a high-altitude perspective.
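To make the CIoU substitution concrete: the Complete IoU loss extends plain IoU with a center-distance penalty and an aspect-ratio consistency term, which is what gives better-behaved gradients than an IoU-only objective, especially for non-overlapping or small boxes. The sketch below is a minimal pure-Python illustration of the standard CIoU formulation (boxes as (x1, y1, x2, y2) corner tuples), not the authors' implementation; in practice this would be computed batch-wise on tensors inside the detector's training loop.

```python
import math


def ciou_loss(box_a, box_b):
    """Complete IoU (CIoU) loss for two axis-aligned boxes (x1, y1, x2, y2).

    loss = 1 - (IoU - rho^2 / c^2 - alpha * v), where
      rho^2 : squared distance between box centers,
      c^2   : squared diagonal of the smallest box enclosing both,
      v     : aspect-ratio consistency term, weighted by alpha.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    eps = 1e-9  # guards against division by zero

    # Intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)

    # Squared center distance, normalised by the enclosing-box diagonal
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 \
         + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term and its trade-off weight
    v = (4.0 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1))
        - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    alpha = v / (1.0 - iou + v + eps)

    return 1.0 - (iou - rho2 / c2 - alpha * v)
```

For perfectly matched boxes the loss is near zero, while for disjoint boxes it exceeds 1, since the center-distance term keeps supplying a penalty (and a gradient) even when IoU is exactly zero.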