School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan, China.
Xiangyang Industrial Institute of Hubei University of Technology, Wuhan, China.
PLoS One. 2022 Apr 21;17(4):e0265503. doi: 10.1371/journal.pone.0265503. eCollection 2022.
Object detection in remote sensing images often suffers from low accuracy and high missed- or false-detection rates due to the large number of small objects, instance-level noise, and cloud occlusion. In this paper, a new object detection model based on SRGAN and YOLOv3, called SR-YOLO, is proposed. It addresses the SRGAN network's sensitivity to hyper-parameters and its mode collapse. Meanwhile, the FPN network in YOLOv3 is replaced by PANet, which shortens the path between the lowest and highest layers; by using this augmented path to enrich the features of each layer, the SR-YOLO model achieves strong robustness and high detection ability. Experimental results on the UCAS High Resolution Aerial Object Detection Dataset show that SR-YOLO achieves excellent performance. Compared with YOLOv3, the average precision (AP) of SR-YOLO increased from 92.35% to 96.13%, the log-average miss rate (MR-2) decreased from 22% to 14%, and the recall increased from 91.36% to 95.12%.
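The two-stage idea described above — super-resolving the remote-sensing image before passing it to the detector — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `super_resolve` stands in for the SRGAN generator (here a simple nearest-neighbour upscale), and `detect` stands in for the YOLOv3 detector with its PANet neck; both function names and the dummy box format are assumptions for illustration only.

```python
import numpy as np

def super_resolve(img, scale=4):
    # Stand-in for the SRGAN generator: nearest-neighbour upsampling.
    # In SR-YOLO, a trained SRGAN would reconstruct high-resolution
    # detail here, which helps with the many small objects in
    # remote-sensing imagery.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def detect(img):
    # Stand-in for the YOLOv3 detector (with PANet replacing FPN).
    # Returns placeholder boxes as (x, y, w, h, confidence) tuples.
    h, w = img.shape[:2]
    return [(0, 0, w, h, 0.9)]

# Two-stage SR-YOLO-style pipeline: enhance the low-resolution tile
# first, then run detection on the super-resolved image.
low_res = np.zeros((64, 64, 3), dtype=np.uint8)
high_res = super_resolve(low_res)
boxes = detect(high_res)
print(high_res.shape, len(boxes))
```

In a real system both stubs would be replaced by trained networks; the sketch only shows the data flow from low-resolution input through super-resolution to detection.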