Wang Qing, Yan Ning, Qin Yasen, Zhang Xuedong, Li Xu
College of Information Engineering, Tarim University, Alaer 843300, China.
Key Laboratory of Tarim Oasis Agriculture, Ministry of Education, Tarim University, Alaer 843300, China.
Sensors (Basel). 2025 May 2;25(9):2882. doi: 10.3390/s25092882.
As an important economic crop, tomato is highly susceptible to diseases that, if not promptly managed, can severely impact yield and quality, leading to significant economic losses. Traditional diagnostic methods rely on expert visual inspection, which is not only laborious but also prone to subjective bias. In recent years, object detection algorithms have gained widespread application in tomato disease detection due to their efficiency and accuracy, providing reliable technical support for crop disease identification. In this paper, we propose an improved tomato leaf disease detection method based on the YOLOv10n algorithm, named BED-YOLO. We constructed an image dataset containing four common tomato diseases (early blight, late blight, leaf mold, and Septoria leaf spot), with 65% of the images sourced from field collections in natural environments and the remainder obtained from the publicly available PlantVillage dataset. All images were annotated with bounding boxes, and the class distribution was relatively balanced to ensure the stability of training and the fairness of evaluation. First, we introduced a Deformable Convolutional Network (DCN) to replace the conventional convolution in the YOLOv10n backbone network, enhancing the model's adaptability to overlapping leaves, occlusions, and blurred lesion edges. Second, we incorporated a Bidirectional Feature Pyramid Network (BiFPN) on top of the FPN + PAN structure to optimize feature fusion and improve the extraction of small disease regions, thereby enhancing the detection accuracy for small lesion targets. Lastly, the Efficient Multi-Scale Attention (EMA) mechanism was integrated into the C2f module, effectively focusing on disease regions while suppressing background noise and preserving the integrity of disease features during multi-scale fusion. The experimental results demonstrated that the improved BED-YOLO model achieved significant performance improvements compared to the original model. Precision increased from 85.1% to 87.2%, recall from 86.3% to 89.1%, and mean average precision (mAP) from 87.4% to 91.3%. Therefore, the improved BED-YOLO model demonstrated significant enhancements in detection accuracy, recall, and overall robustness. Notably, it exhibited stronger practical applicability, particularly when tested on images captured under natural field conditions, making it highly suitable for intelligent disease monitoring tasks in large-scale agricultural scenarios.
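To make the first modification concrete, the sketch below illustrates how a standard 3x3 convolution block in a YOLO-style backbone could be replaced by a deformable convolution with learned sampling offsets. This is a minimal illustration based on torchvision's DeformConv2d, not the authors' implementation; the block name, channel sizes, and activation choice are assumptions.

```python
# Minimal sketch (not the authors' code) of swapping a standard 3x3 Conv-BN-SiLU
# block for a deformable convolution, as described for the YOLOv10n backbone.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableConvBlock(nn.Module):
    """Drop-in replacement for a 3x3 conv block, with learned per-position offsets."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, s: int = 1):
        super().__init__()
        p = k // 2
        # A small conv predicts (dy, dx) offsets for every kernel tap at every position.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, stride=s, padding=p)
        self.dconv = DeformConv2d(in_ch, out_ch, kernel_size=k, stride=s, padding=p)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset(x)                      # (B, 2*k*k, H', W')
        return self.act(self.bn(self.dconv(x, offsets)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)                    # dummy backbone feature map
    block = DeformableConvBlock(64, 128, k=3, s=2)
    print(block(x).shape)                             # torch.Size([1, 128, 40, 40])
```

The learned offsets let the kernel sample off-grid locations, which is the property the abstract credits for better handling of overlapping leaves and blurred lesion edges.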
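The second modification, BiFPN-style fusion on top of the FPN + PAN neck, centers on weighted feature fusion with learnable, normalized weights. The following sketch shows one such fusion node under the standard "fast normalized fusion" formulation from the BiFPN literature; layer names, channel counts, and the depthwise-separable post-fusion conv are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch of a BiFPN-style weighted fusion node (fast normalized fusion).
import torch
import torch.nn as nn


class WeightedFusion(nn.Module):
    """Fuse N same-shaped feature maps with learnable, non-negative normalized weights."""

    def __init__(self, num_inputs: int, channels: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps
        # Depthwise-separable conv applied after fusion, as is common in BiFPN nodes.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        w = torch.relu(self.w)                        # keep weights non-negative
        w = w / (w.sum() + self.eps)                  # fast normalized fusion
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)


if __name__ == "__main__":
    p4_td = torch.randn(1, 128, 40, 40)               # top-down path feature
    p4_in = torch.randn(1, 128, 40, 40)               # lateral input feature
    node = WeightedFusion(num_inputs=2, channels=128)
    print(node([p4_td, p4_in]).shape)                 # torch.Size([1, 128, 40, 40])
```

Because the weights are learned per node, the network can emphasize whichever scale carries the small-lesion evidence, which is the motivation the abstract gives for adding BiFPN.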
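For the third modification, the EMA module itself involves grouped cross-spatial learning and is not reproduced here; as a simplified stand-in, the sketch below uses a coordinate-attention-style gate to illustrate where a lightweight spatial re-weighting module would sit after a C2f-style feature map and how it can emphasize lesion regions over background. The module name, reduction ratio, and insertion point are assumptions, not the authors' EMA implementation.

```python
# Simplified coordinate-attention-style gate, standing in for the EMA module,
# to show how a C2f-style feature map can be re-weighted toward lesion regions.
import torch
import torch.nn as nn


class CoordGate(nn.Module):
    """Directional pooling + sigmoid gates applied to a feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over height -> (B, C, 1, W)
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid),
            nn.SiLU(),
        )
        self.gate_h = nn.Conv2d(mid, channels, 1)
        self.gate_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = self.pool_h(x)                              # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)          # (B, C, W, 1)
        y = self.reduce(torch.cat([xh, xw], dim=2))      # (B, mid, H+W, 1)
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.gate_h(yh))              # height-wise gate (B, C, H, 1)
        aw = torch.sigmoid(self.gate_w(yw.permute(0, 1, 3, 2)))  # width-wise gate (B, C, 1, W)
        return x * ah * aw                               # re-weight toward disease regions


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)                   # dummy C2f output
    print(CoordGate(256)(feat).shape)                    # torch.Size([1, 256, 40, 40])
```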