Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
Sensors (Basel). 2018 May 24;18(6):1703. doi: 10.3390/s18061703.
Autonomous landing of an unmanned aerial vehicle (UAV), or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve it by combining multiple sensors, such as global positioning system (GPS) receivers, inertial measurement units, and multi-camera systems. Although these approaches successfully estimate a UAV's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we show how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm, which relies on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network (CNN), named lightDenseYOLO, that extracts trained features from an input image captured by the drone's visible-light camera sensor to predict a marker's location. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without CNNs, in terms of both accuracy and processing time.