Faculty of Science and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway.
SINTEF Digital, Forskningsveien 1, 0373 Oslo, Norway.
Sensors (Basel). 2020 Sep 14;20(18):5249. doi: 10.3390/s20185249.
Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems in order to navigate between crops and to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions for structured environments, such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic differences in the appearance of crops between farms and throughout the season, and the variations in crop spacing and in the contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway, based on semantic segmentation with a convolutional neural network (CNN) that segments input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway's hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that it compares favourably to other traditional guidance approaches.
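The adaptive multi-ROI idea described in the abstract can be illustrated with a minimal sketch. This is a hypothetical implementation, not the authors' exact algorithm: a binary drivable mask (as would come from the CNN's not-crop class) is divided into horizontal strips, the centroid of drivable pixels in each strip is located inside a search window centred on the previous strip's centroid (so the ROIs adapt to curved or uneven rows), and a line is fitted through the centroids to give a steering trajectory. The function name, window size, and line model are all illustrative assumptions.

```python
import numpy as np

def fit_row_trajectory(mask, n_rois=8, win_half_width=40):
    """Hypothetical sketch of an adaptive multi-ROI centreline fit.

    mask: 2-D array, nonzero = drivable (not-crop) pixels, e.g. the
    thresholded output of a semantic segmentation network.
    Returns (m, c, centroids): a least-squares line x = m*y + c through
    the per-strip centroids, plus the centroids themselves.
    """
    h, w = mask.shape
    strip_h = h // n_rois
    prev_x = w // 2                      # start searching at image centre
    ys, xs = [], []
    for i in range(n_rois - 1, -1, -1):  # bottom strip first (nearest robot)
        y0, y1 = i * strip_h, (i + 1) * strip_h
        x0 = max(0, prev_x - win_half_width)
        x1 = min(w, prev_x + win_half_width)
        strip = mask[y0:y1, x0:x1]
        cols = strip.sum(axis=0)         # drivable-pixel count per column
        if cols.sum() == 0:              # no drivable pixels: keep window
            continue
        cx = int(np.round(np.average(np.arange(x0, x1), weights=cols)))
        ys.append((y0 + y1) / 2.0)
        xs.append(cx)
        prev_x = cx                      # adapt the next ROI to this centroid
    m, c = np.polyfit(ys, xs, 1)         # steering line x = m*y + c
    return m, c, list(zip(xs, ys))
```

The adaptive search window is the key design choice: because each strip's ROI follows the previous centroid, the fit can track rows that curve across hilly terrain instead of assuming a straight corridor through the full image.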