Department of Industrial Engineering, University of Naples Federico II, Piazzale Tecchio 80, 80125 Naples, Italy.
Sensors (Basel). 2019 Oct 7;19(19):4332. doi: 10.3390/s19194332.
The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated manner (swarms). In this respect, visual cameras represent a key instrument to enable coordination by giving each UAV the capability to visually monitor the other members of the formation. Hence, a related technological challenge is the development of robust solutions to detect and track cooperative targets through a sequence of frames. In this framework, this paper proposes an innovative deep-learning-based approach to carry out this task. Specifically, the You Only Look Once (YOLO) object detection system is integrated within an original processing architecture in which the machine-vision algorithms are aided by navigation hints available thanks to the cooperative nature of the formation. An experimental flight test campaign, involving formations of two multirotor UAVs, is conducted to collect a database of images suitable to assess the performance of the proposed approach. Results demonstrate high accuracy and robustness against challenging conditions in terms of illumination, background, and target-range variability.
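To illustrate the idea of aiding machine vision with cooperative navigation data, the sketch below shows one plausible form of such a scheme: the tracked UAV's relative position (shared over the formation's data link) is projected into the camera frame with a pinhole model, and YOLO detections are gated against the predicted pixel location. All function names, parameters, and the list-of-tuples detection format are illustrative assumptions, not the paper's actual implementation; the YOLO detector itself is mocked as a list of bounding boxes.

```python
import numpy as np

def project_to_image(rel_pos_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point, expressed in the camera frame
    (x right, y down, z forward, metres), to pixel coordinates."""
    x, y, z = rel_pos_cam
    return np.array([cx + fx * x / z, cy + fy * y / z])

def gate_detections(detections, predicted_px, radius_px):
    """Keep only detections whose bounding-box centre falls inside the
    search window predicted from the cooperative navigation hint.

    detections: list of (u, v, w, h, score) boxes, e.g. from YOLO.
    """
    kept = []
    for (u, v, w, h, score) in detections:
        centre = np.array([u + w / 2.0, v + h / 2.0])
        if np.linalg.norm(centre - predicted_px) <= radius_px:
            kept.append((u, v, w, h, score))
    return kept

# Hypothetical scenario: the cooperative UAV reports it is 10 m straight
# ahead of the camera (intrinsics are made-up example values).
predicted = project_to_image((0.0, 0.0, 10.0), fx=600, fy=600, cx=320, cy=240)

# Mocked YOLO output: one box near the predicted point, one spurious box.
boxes = [(300, 220, 40, 40, 0.91),   # centre (320, 240) -> consistent
         (50, 50, 30, 30, 0.80)]     # centre (65, 65)   -> rejected
accepted = gate_detections(boxes, predicted, radius_px=80)
```

In this configuration the target projects to the image centre, so only the first box survives the gate; in practice the window radius would be sized from the navigation-solution uncertainty rather than fixed.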