
Evaluating Cross-Applicability of Weed Detection Models Across Different Crops in Similar Production Environments.

Author Information

Sapkota Bishwa B, Hu Chengsong, Bagavathiannan Muthukumar V

Affiliations

Department of Soil and Crop Sciences, Texas A&M University, College Station, TX, United States.

Department of Biological and Agricultural Engineering, College Station, TX, United States.

Publication Information

Front Plant Sci. 2022 Apr 28;13:837726. doi: 10.3389/fpls.2022.837726. eCollection 2022.

Abstract

Convolutional neural networks (CNNs) have revolutionized the weed detection process with tremendous improvements in precision and accuracy. However, training these models is time-consuming and computationally demanding; thus, training weed detection models for every crop-weed environment may not be feasible. It is imperative to evaluate how a CNN-based weed detection model trained for a specific crop may perform in other crops. In this study, a CNN model was trained to detect morningglories and grasses in cotton. Assessments were made to gauge the potential of this model in detecting the same weed species in soybean and corn under two levels of detection complexity (levels 1 and 2). Two popular object detection frameworks, YOLOv4 and Faster R-CNN, were trained to detect weeds under two schemes: Detect_Weed (detecting at the weed/crop level) and Detect_Species (detecting at the weed species level). In addition, the main cotton dataset was supplemented with different amounts of non-cotton crop images to see whether cross-crop applicability could be improved. Both frameworks achieved reasonably high accuracy levels for the cotton test datasets under both schemes (Average Precision, AP: 0.83-0.88; mean Average Precision, mAP: 0.65-0.79). The same models performed differently over other crops under both frameworks (AP: 0.33-0.83; mAP: 0.40-0.85). In particular, relatively higher accuracies were observed for soybean than for corn, and for complexity level 1 than for level 2. Significant improvements in cross-crop applicability were further observed when additional corn and soybean images were added to the model training. These findings provide valuable insights into improving the global applicability of weed detection models.
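The abstract reports results as Average Precision (AP) and mean Average Precision (mAP), the standard object detection metrics used for both YOLOv4 and Faster R-CNN. The sketch below is an illustrative example of how these metrics are typically computed (all-point interpolation over a precision-recall curve), not the authors' evaluation code; the function names and the toy detection data are assumptions for illustration.

```python
def average_precision(is_tp, n_ground_truth):
    """AP via all-point interpolation from confidence-sorted detections.

    is_tp: booleans, one per detection, sorted by descending confidence
           (True = matched a ground-truth box, e.g. at IoU >= 0.5).
    n_ground_truth: total number of ground-truth boxes for the class.
    """
    tp = fp = 0
    precisions, recalls = [], []
    for hit in is_tp:
        tp += hit
        fp += not hit
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_ground_truth)

    # Make precision monotonically non-increasing (right-to-left envelope).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])

    # Area under the interpolated precision-recall curve:
    # sum precision * recall-step wherever recall increases.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap


def mean_average_precision(ap_per_class):
    """mAP: unweighted mean of per-class APs (e.g. per weed species)."""
    return sum(ap_per_class) / len(ap_per_class)


# Toy example: 5 detections for one class, 4 ground-truth boxes.
ap = average_precision([True, True, False, True, False], n_ground_truth=4)
print(ap)  # 0.6875
```

Under the Detect_Weed scheme a single AP suffices (one "weed" class), whereas Detect_Species averages the per-species APs into the mAP values quoted in the abstract.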


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5be2/9096552/665b665cb2b8/fpls-13-837726-g001.jpg
