Use of synthetic images for training a deep learning model for weed detection and biomass estimation in cotton.

Affiliations

Department of Soil and Crop Sciences, Texas A&M University, College Station, TX, 77843, USA.

Department of Ecosystem Science and Management, Texas A&M University, College Station, TX, 77843, USA.

Publication

Sci Rep. 2022 Nov 15;12(1):19580. doi: 10.1038/s41598-022-23399-z.

Abstract

Site-specific treatment of weeds in agricultural landscapes has gained importance in recent years owing to cost savings and minimal environmental impact. Different detection methods have been developed and tested for precision weed management systems, and recent developments in neural networks offer great promise. A major limitation of neural network models, however, is the large volume of data required for training. The current study explores an alternative to relying on real images to address this issue. Synthetic images were generated with various strategies using plant instances clipped from UAV-borne real images. In addition, a Generative Adversarial Network (GAN) was used to generate fake plant instances, which were then used in generating synthetic images. These images were used to train a powerful convolutional neural network (CNN), Mask R-CNN, for weed detection and segmentation in a transfer-learning mode. The study was conducted on morningglories (MG) and grass weeds (Grass) infesting cotton. Biomass for individual weeds was also collected in the field for biomass modeling using detection and segmentation results derived from model inference. Results showed comparable performance between the real-plant-based synthetic image dataset (mask mAP: 0.60; bounding-box mAP: 0.64) and the real image dataset (mask mAP: 0.80; bounding-box mAP: 0.81). However, the mixed dataset (real images + real-plant-instance-based synthetic images) yielded no performance gain for the segmentation mask and only a very small gain for the bounding box (mask mAP: 0.80; bounding-box mAP: 0.83). Around 40-50 plant instances were sufficient for generating synthetic images that gave optimal performance. Row orientation of cotton in the synthetic images was beneficial compared with random orientation. Synthetic images generated with automatically clipped plant instances performed similarly to those generated with manually clipped instances. Synthetic images based on GAN-derived fake plant instances did not perform as well as those based on real plant instances. The canopy mask area predicted weed biomass better than the bounding-box area, with R values of 0.66 and 0.46 for MG and Grass, respectively. These findings offer valuable insights for future work on using synthetic images for weed detection, segmentation, and biomass estimation in row crops.
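The synthetic-image strategy the abstract describes (clipping plant instances from UAV imagery and compositing them onto a background in crop-row order) can be sketched roughly as follows. This is a minimal illustration of the general cut-and-paste technique, not the paper's actual pipeline; the function name, the row/spacing parameters, and the toy "plant" and "soil" arrays are all made up for the example.

```python
import numpy as np

def make_synthetic_image(background, instances, row_y_positions, spacing):
    """Compose a synthetic field image by pasting clipped plant instances
    onto a background at row-ordered positions (a sketch of row-oriented
    composition; all names here are illustrative).

    background: HxWx3 uint8 array (e.g. a bare-soil patch)
    instances: list of (patch, mask) pairs; patch is hxwx3, mask is hxw bool
    Returns the composed image and a list of xyxy bounding boxes.
    """
    canvas = background.copy()
    boxes = []
    x = 0
    for patch, mask in instances:
        h, w = mask.shape
        # alternate between crop-row y-positions instead of random placement
        y = row_y_positions[len(boxes) % len(row_y_positions)]
        if x + w > canvas.shape[1] or y + h > canvas.shape[0]:
            break
        region = canvas[y:y + h, x:x + w]
        region[mask] = patch[mask]           # paste only the plant pixels
        boxes.append((x, y, x + w, y + h))   # box label for training
        x += spacing
    return canvas, boxes

# Toy data: green 20x20 "plants" on a brown 128x256 "soil" background.
soil = np.full((128, 256, 3), (120, 90, 60), dtype=np.uint8)
plant = np.zeros((20, 20, 3), dtype=np.uint8)
plant[..., 1] = 180
mask = np.ones((20, 20), dtype=bool)
img, boxes = make_synthetic_image(soil, [(plant, mask)] * 5,
                                  row_y_positions=[30, 80], spacing=40)
```

In a real pipeline the instance masks would come from manual or automatic clipping of UAV images, and the generated boxes and masks would serve directly as Mask R-CNN training annotations.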

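The biomass modeling step described in the abstract (relating field-measured biomass to the canopy mask area predicted by the segmentation model) amounts to a simple regression. The sketch below uses ordinary least squares and the Pearson correlation R; the paired observations are invented for illustration and are not the paper's data (the paper reports R values of 0.66 for MG and 0.46 for Grass).

```python
import numpy as np

# Hypothetical paired observations: predicted canopy mask area (pixel^2
# or cm^2, from segmentation output) vs. field-measured dry biomass (g).
mask_area = np.array([120., 340., 510., 800., 1100., 1500.])
biomass = np.array([4., 11., 15., 26., 33., 47.])

# Fit biomass = a * area + b by ordinary least squares.
A = np.vstack([mask_area, np.ones_like(mask_area)]).T
(a, b), *_ = np.linalg.lstsq(A, biomass, rcond=None)

# Pearson correlation R between mask area and biomass.
r = np.corrcoef(mask_area, biomass)[0, 1]

# Predicted biomass for each observed mask area.
pred = a * mask_area + b
```

The same fit could be repeated with bounding-box area as the predictor; the abstract's finding is that the mask area, which excludes background pixels inside the box, correlates more strongly with biomass.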

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c43/9666527/255cf98b2333/41598_2022_23399_Fig1_HTML.jpg
