Genze Nikita, Wirth Maximilian, Schreiner Christian, Ajekwe Raymond, Grieb Michael, Grimm Dominik G
Technical University of Munich, TUM Campus Straubing for Biotechnology and Sustainability, Bioinformatics, Schulgasse 22, 94315, Straubing, Germany.
Weihenstephan-Triesdorf University of Applied Sciences, Bioinformatics, Petersgasse 18, 94315, Straubing, Germany.
Plant Methods. 2023 Aug 22;19(1):87. doi: 10.1186/s13007-023-01060-8.
Efficient and site-specific weed management is a critical step in many agricultural tasks. Drone imagery combined with modern machine-learning-based computer vision methods can be used to assess weed infestation in agricultural fields more efficiently. However, capture quality can be degraded by several factors, including motion blur: images become blurred when the drone moves during acquisition, e.g. due to wind, or because of unsuitable camera settings. Such blur complicates the annotation of training and test samples and can reduce predictive power in segmentation and classification tasks.
In this study, we propose DeBlurWeedSeg, a combined deblurring and segmentation model for weed and crop segmentation in motion-blurred images. For this purpose, we first collected a new dataset of matching sharp and naturally blurred image pairs of real sorghum and weed plants from drone captures of the same agricultural field. The data was used to train and evaluate DeBlurWeedSeg on both the sharp and the blurred images of a hold-out test set. We show that DeBlurWeedSeg outperforms a standard segmentation model without an integrated deblurring step, with a relative improvement of [Formula: see text] in terms of the Sørensen-Dice coefficient.
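The Sørensen-Dice coefficient used as the evaluation metric here is defined for two binary masks A and B as 2|A∩B| / (|A| + |B|). The paper does not publish its evaluation code in this abstract, so the following is only a minimal illustrative sketch of the metric on NumPy masks, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Sørensen-Dice coefficient for binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * intersection / denom
```

For multi-class segmentation (e.g. weed, sorghum, background), the coefficient is typically computed per class on one-hot masks and then averaged.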
Our combined deblurring and segmentation model DeBlurWeedSeg is able to accurately segment weeds from sorghum and background in both sharp and motion-blurred drone captures. This has high practical relevance, as lower error rates in weed and crop segmentation could lead to better weed control, e.g. when using robots for mechanical weed removal.
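The core architectural idea, a deblurring stage whose restored output feeds a segmentation stage, can be sketched as a simple function composition. The stand-in stages below (an unsharp-mask "deblurrer" and a threshold "segmenter") are hypothetical toy placeholders for illustration only; the actual model uses learned deep networks for both stages:

```python
import numpy as np

def deblur(image: np.ndarray) -> np.ndarray:
    """Toy deblurring stage: unsharp masking with a 3x3 box blur.

    Stand-in for the learned deblurring network in the real pipeline.
    Expects a 2D float image with values in [0, 1].
    """
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    box = sum(padded[i:i + h, j:j + w]
              for i in range(3) for j in range(3)) / 9.0
    return np.clip(image + (image - box), 0.0, 1.0)

def segment(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Toy segmentation stage: global threshold into a binary mask.

    Stand-in for the learned segmentation network.
    """
    return (image > threshold).astype(np.uint8)

def deblur_weed_seg(image: np.ndarray) -> np.ndarray:
    """Combined pipeline: restore the image first, then segment it."""
    return segment(deblur(image))
```

The design point the sketch illustrates is that segmentation operates on the restored image, so blur-induced errors are reduced before class boundaries are predicted, rather than being corrected afterwards.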