Dufek Jan, Murphy Robin
Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States.
Front Robot AI. 2019 May 31;6:42. doi: 10.3389/frobt.2019.00042. eCollection 2019.
This article addresses the problem of how to visually estimate the pose of a rescue unmanned surface vehicle (USV) using an unmanned aerial system (UAS) in marine mass casualty events. A UAS visually navigating the USV can help solve problems with teleoperation and manpower requirements. The solution has to estimate the full pose (both position and orientation) and has to work in an outdoor environment, from an oblique view angle (up to 85° from nadir), at large distances (180 m), in real time (5 Hz), with both the UAS (up to 22 m/s) and the object (up to 10 m/s) moving. None of the 58 reviewed studies satisfied all of those requirements. This article presents two algorithms for visual position estimation using the object's hue (thresholding and histogramming) and four techniques for visual orientation estimation using the object's shape that satisfy those requirements. Four physical experiments were performed to validate feasibility and to compare the thresholding and histogramming algorithms. Histogramming had a statistically significantly lower position estimation error than thresholding in all four trials (p-values ranging from ~0 to 8.23263 × 10), but a statistically significantly lower orientation estimation error in only two of the trials (p-values 3.51852 × 10 and 1.32762 × 10). The mean position estimation error ranged from 7 to 43 px, while the mean orientation estimation error ranged from 0.134 to 0.480 rad. The histogramming algorithm demonstrated feasibility across variations in environmental conditions and physical settings while requiring fewer parameters than thresholding. However, three problems were identified: the orientation estimation error was quite large for both algorithms, both algorithms required manual tuning before each trial, and neither algorithm was robust enough to recover from significant changes in illumination conditions.
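The two hue-based position-estimation strategies named above can be sketched as follows. This is a minimal NumPy illustration operating on a normalized hue channel, not the authors' implementation; the function names, the fixed hue band, the 32-bin histogram, and the weighted-centroid back-projection are all assumptions made for the example.

```python
import numpy as np

def position_by_threshold(hue, lo, hi):
    """Thresholding: centroid of pixels whose hue lies in a manually
    tuned band [lo, hi]. Returns (x, y) or None if no pixel matches."""
    mask = (hue >= lo) & (hue <= hi)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def position_by_histogram(hue, ref_hues, bins=32):
    """Histogramming: back-project a reference hue histogram, weighting
    each pixel by how common its hue is in a reference sample of the
    object, then take the weighted centroid. Returns (x, y) or None."""
    hist, _ = np.histogram(ref_hues, bins=bins, range=(0.0, 1.0), density=True)
    idx = np.clip((hue * bins).astype(int), 0, bins - 1)
    weights = hist[idx]
    total = weights.sum()
    if total == 0:
        return None
    ys, xs = np.indices(hue.shape)
    return float((xs * weights).sum() / total), float((ys * weights).sum() / total)
```

One reason histogramming needs fewer parameters is visible here: thresholding requires hand-picked band limits per trial, while the histogram is built directly from a reference sample of the object's pixels.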
To reduce the orientation estimation error, inverse perspective warping will be necessary to reduce the perspective distortion. To eliminate the necessity for tuning and increase the robustness, a machine learning approach to pose estimation might ultimately be a better solution.
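The proposed fix can be sketched as: map the object's pixel coordinates through a ground-plane homography (undoing the oblique perspective) before measuring orientation from shape. This minimal NumPy sketch uses a second-moment principal-axis angle as the shape-based orientation; the homography `H`, the point-cloud representation, and the moment method are illustrative assumptions, not the paper's specific technique.

```python
import numpy as np

def warp_points(H, pts):
    """Map Nx2 image points through a 3x3 homography H (with projective
    division). For inverse perspective mapping, H would take oblique-view
    image coordinates to a top-down ground-plane frame."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def shape_orientation(pts):
    """Orientation of a point cloud from its second central moments:
    the principal-axis angle in radians."""
    c = pts - pts.mean(axis=0)
    mu20 = (c[:, 0] ** 2).mean()
    mu02 = (c[:, 1] ** 2).mean()
    mu11 = (c[:, 0] * c[:, 1]).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

Measuring `shape_orientation(warp_points(H, pts))` instead of `shape_orientation(pts)` removes the foreshortening that an 85°-from-nadir view imposes on the object's apparent shape.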