Leite Douglas Vieira, de Brito Alisson Vasconcelos, Faccioli Gregorio Guirada, Haddad Souza Vieira Gustavo
EMEC, Sergipe Educational, Technology and Scientific Institute, Lourival Batista, Lourival Batista Highway s/n, Lagarto 49400-000, Sergipe, Brazil.
Laboratory of Embedded Systems and Robotics, Paraíba Federal University, Campus I Lot. Cidade Universitaria, João Pessoa 58051-900, Paraíba, Brazil.
Plants (Basel). 2025 Jul 1;14(13):2011. doi: 10.3390/plants14132011.
The accurate assessment of plant disease severity is crucial for effective crop management. Deep learning, especially via CNNs, is widely used for image segmentation in plant lesion detection, but accurately assessing disease severity across varied environmental conditions remains challenging. This study evaluates eight deep learning models for detecting and quantifying Cercospora leaf spot severity in chili peppers under natural field conditions. A custom dataset of 1645 chili pepper leaf images, collected from a Brazilian plantation and annotated with 6282 lesions, was developed to reflect real-world variability in lighting and background. First, an algorithm was developed to process the raw images, applying ROI selection and background removal. Then, four YOLOv8 and four Mask R-CNN models were fine-tuned for pixel-level segmentation and severity classification, comparing one-stage and two-stage models to offer practical insights for agricultural applications. In pixel-level segmentation on the test dataset, Mask R-CNN achieved superior precision, with a Mean Intersection over Union (MIoU) of 0.860 and an F1-score of 0.924 for the mask_rcnn_R101_FPN_3x model, compared to 0.808 and 0.893 for the YOLOv8s-Seg model. However, in severity classification, Mask R-CNN underestimated higher severity levels, with an accuracy of 72.3% for level III, whereas YOLOv8 attained 91.4%. Additionally, YOLOv8 demonstrated greater efficiency, with an inference time of 27 ms versus 89 ms for Mask R-CNN. While Mask R-CNN excels in segmentation accuracy, YOLOv8 offers a compelling balance of speed and reliable severity classification, making it suitable for real-time plant disease assessment in agricultural applications.
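The abstract reports pixel-level IoU and F1 for lesion masks. As a minimal sketch (not the authors' evaluation code), both metrics can be computed from a predicted and a ground-truth binary mask via pixel-wise true positives, false positives, and false negatives; the toy masks below are illustrative only:

```python
import numpy as np

def pixel_metrics(pred, gt):
    """IoU and F1 for binary lesion masks (1 = lesion pixel, 0 = background)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()        # lesion pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()       # background predicted as lesion
    fn = np.logical_and(~pred, gt).sum()       # lesion pixels missed
    union = tp + fp + fn
    iou = tp / union if union else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return iou, f1

# Toy 4x4 example: ground truth has a 2x2 lesion; the prediction
# covers it fully but spills over by two extra pixels.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1                   # 4 lesion pixels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1                 # 6 predicted pixels, all 4 true ones covered
iou, f1 = pixel_metrics(pred, gt)  # iou = 4/6, f1 = 8/10
```

A dataset-level MIoU, as reported in the study, would average the per-image (or per-class) IoU values over the test set.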