Choi Woen-Sug, Olson Derek R, Davis Duane, Zhang Mabel, Racson Andy, Bingham Brian, McCarrin Michael, Vogt Carson, Herman Jessica
Naval Postgraduate School, Monterey, CA, United States.
Open Robotics, Mountain View, CA, United States.
Front Robot AI. 2021 Sep 8;8:706646. doi: 10.3389/frobt.2021.706646. eCollection 2021.
One of the key distinguishing aspects of underwater manipulation tasks is the perception challenges of the ocean environment, including turbidity, backscatter, and lighting effects. Consequently, underwater perception often relies on sonar-based measurements to estimate the vehicle's state and surroundings, either standalone or in concert with other sensing modalities, to support the perception necessary to plan and control manipulation tasks. Simulation of the multibeam echosounder, while not a substitute for in-water testing, is a critical capability for developing manipulation strategies in the complex and variable ocean environment. Although several approaches exist in the literature to simulate synthetic sonar images, the methods in the robotics community typically use image processing and video rendering software to comply with real-time execution requirements. In addition to lacking a physics-based interaction model between sound and the scene of interest, these rendered sonar images omit several basic properties, notably the effects of a coherent imaging system and the coherent speckle that distort object geometry in the sonar image. To address this deficiency, we present a physics-based multibeam echosounder simulation method that captures these fundamental aspects of sonar perception. A point-based scattering model is implemented to calculate the acoustic interaction between the target and the environment. This is a simplified representation of target scattering, but it can produce realistic coherent image speckle and the correct point spread function. The results demonstrate that this multibeam echosounder simulator generates qualitatively realistic images with high efficiency, providing both the sonar image and the physical time-series signal data. This synthetic sonar data is a key enabler for developing, testing, and evaluating autonomous underwater manipulation strategies that use sonar as a component of perception.
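The abstract's central claim is that a point-based scattering model, evaluated coherently, reproduces the speckle statistics that image-rendering approaches miss. The following is a minimal sketch of that idea, not the authors' implementation: it coherently sums complex echoes from randomly placed point scatterers within one resolution cell and checks that the resulting intensity exhibits fully developed speckle (scintillation index near 1, as for an exponential intensity distribution). All parameter values (frequency, range, scatterer count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

c = 1500.0               # nominal sound speed in seawater (m/s)
f = 900e3                # illustrative centre frequency (Hz)
k = 2.0 * np.pi * f / c  # acoustic wavenumber (rad/m)

def coherent_return(n_scatterers=200, r0=10.0, patch=0.05):
    """Coherent sum of echoes from random point scatterers within one
    resolution cell centred at range r0 (metres, +/- patch)."""
    r = rng.uniform(r0 - patch, r0 + patch, n_scatterers)
    a = rng.rayleigh(1.0, n_scatterers)        # random scatterer amplitudes
    # two-way propagation phase 2*k*r for each point scatterer
    return np.sum(a * np.exp(1j * 2.0 * k * r))

# Many independent realisations (pings) give the speckle statistics.
I = np.array([np.abs(coherent_return()) ** 2 for _ in range(5000)])

# Fully developed speckle: intensity is exponentially distributed,
# so the scintillation index var(I)/mean(I)^2 is close to 1.
si = I.var() / I.mean() ** 2
print(round(si, 2))
```

An incoherent (intensity-only) renderer would instead average the scatterer contributions and produce a nearly constant cell intensity; the coherent sum is what yields the speckle and, across beams, the system's point spread function described in the abstract.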