
Study on Visual Detection Algorithm of Sea Surface Targets Based on Improved YOLOv3.

Affiliations

College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China.

College of Shipbuilding Engineering, Harbin Engineering University, Harbin 150001, China.

Publication Information

Sensors (Basel). 2020 Dec 18;20(24):7263. doi: 10.3390/s20247263.

Abstract

Countries around the world have paid increasing attention to marine security, and sea-target detection is a key task in ensuring marine safety. It is therefore of great significance to propose an efficient and accurate sea-surface target detection algorithm. The anchor-setting method of traditional YOLO v3 uses only the degree of overlap between an anchor and the ground-truth box as its criterion. As a result, the information in some feature maps cannot be used, and the accuracy required for target detection is hard to achieve in a complex sea environment. This paper therefore proposes two new anchor-setting methods for the visual detection of sea targets: the average method and the select-all method. In addition, cross PANet, a feature-fusion structure for cross-feature maps, was developed and used to obtain a better baseline, cross YOLO v3, in which the different anchor-setting methods were combined with a focal loss for experimental comparison on two datasets: SeaBuoys (sea buoys) and the existing SeaShips (sea ships). The results show that the proposed methods significantly improve the accuracy of YOLO v3 in detecting sea-surface targets, with the highest mAP values on the two datasets reaching 98.37% and 90.58%, respectively.
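For context, the "traditional" anchor setting the abstract critiques is the IoU-driven k-means clustering used by YOLO v2/v3: ground-truth (width, height) pairs are clustered with distance d = 1 - IoU, and the resulting anchors are split three per detection scale. The sketch below assumes that conventional procedure; it is not the paper's code, and the function names, iteration count, and seed are illustrative.

```python
import numpy as np

def iou_wh(boxes, anchors):
    # boxes: (N, 2), anchors: (K, 2); columns are (width, height).
    # Boxes and anchors are compared as if they shared a corner,
    # so the IoU depends only on their shapes, not positions.
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union  # shape (N, K)

def kmeans_anchors(boxes, k=9, iters=300, seed=0):
    # k-means on (w, h) pairs with distance d = 1 - IoU, the
    # conventional YOLO v3 anchor-setting procedure.
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    assign = np.full(len(boxes), -1)
    for _ in range(iters):
        # Maximizing IoU is equivalent to minimizing 1 - IoU.
        new_assign = iou_wh(boxes, anchors).argmax(axis=1)
        if np.array_equal(new_assign, assign):
            break  # assignments stable: converged
        assign = new_assign
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area
```

Calling kmeans_anchors on the (w, h) pairs of a dataset's ground-truth boxes returns nine anchors, which YOLO v3 then assigns three per feature-map scale. The paper's argument is that relying on this overlap criterion alone can leave some feature maps' anchors poorly matched to the data, which its average and select-all methods are designed to address.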
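The experiments also combine the anchor-setting methods with a focal loss. The abstract does not give the exact loss configuration, so the following is a minimal PyTorch sketch of the standard binary focal loss (Lin et al., 2017) with its common defaults α = 0.25 and γ = 2, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss: scales BCE by (1 - p_t)**gamma so that
    # well-classified (easy) examples contribute little to the loss.
    bce = F.binary_cross_entropy_with_logits(logits, targets,
                                             reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)  # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

Down-weighting easy negatives in this way is a common remedy for the foreground/background imbalance of one-stage detectors such as YOLO v3.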


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/409f/7766418/2070d1f3e14a/sensors-20-07263-g001.jpg
