DTIS, ONERA, Université Paris Saclay, F-91123 Palaiseau, France.
LIRMM, University of Montpellier, CNRS, 34080 Montpellier, France.
Sensors (Basel). 2019 Feb 8;19(3):687. doi: 10.3390/s19030687.
In the context of underwater robotics, the visual degradation induced by the properties of the medium makes the exclusive use of cameras for localization difficult. Hence, many underwater localization methods rely on expensive navigation sensors combined with acoustic positioning. On the other hand, purely visual localization methods have shown great potential for underwater localization, but challenging conditions, such as turbidity and dynamic scenes, remain complex to tackle. In this paper, we propose a new visual odometry method designed to be robust to these visual perturbations. The proposed algorithm has been assessed on both simulated and real underwater datasets and outperforms state-of-the-art terrestrial visual SLAM methods under many of the most challenging conditions. The main application of this work is the localization of Remotely Operated Vehicles used in underwater archaeological missions, but the developed system can be used in any other application as long as visual information is available.