Das Aritra, Pathan Fahad, Jim Jamin Rahman, Ouishy Momotaz Rahman, Kabir Md Mohsin, Mridha M F
Department of Computer Science and Engineering, American International University-Bangladesh, Dhaka-1229, Bangladesh.
Institute of Information and Communication Technology, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh.
Heliyon. 2025 Feb 11;11(4):e42575. doi: 10.1016/j.heliyon.2025.e42575. eCollection 2025 Feb 28.
Agricultural productivity is essential to global economic development: it ensures food security, boosts incomes and supports employment. It enhances stability, reduces poverty and promotes sustainable growth, creating a robust foundation for overall economic progress and improved quality of life worldwide. However, crop diseases can significantly reduce agricultural output and drain economic resources, so early detection is essential to minimize losses and maximize production. In this study, a novel Deep Learning (DL) model called the Explainable Lightweight Tomato Leaf Disease Network (XLTLDisNet) is proposed. The model has been trained and evaluated on the publicly available PlantVillage tomato leaf dataset, which contains ten classes of tomato leaf images, including healthy leaves. By leveraging different data augmentation techniques, the proposed approach achieved an overall accuracy of 97.24%, a precision of 97.20%, a recall of 96.70% and an F1-score of 97.10%. Additionally, explainable AI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) have been integrated to enhance the explainability and interpretability of the proposed model.
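The abstract does not describe how Grad-CAM is wired into XLTLDisNet, so the sketch below is only a generic illustration of the technique for a Keras-style CNN classifier: the layer name `last_conv_layer_name` and the way the trained model is obtained are assumptions, not details from the paper.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap for one preprocessed image of shape (H, W, C)."""
    # Model that maps the input to the last convolutional feature map and the class scores.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the predicted class by default
        class_score = preds[:, class_index]
    # Gradients of the class score with respect to the conv feature map.
    grads = tape.gradient(class_score, conv_out)
    # Channel importance weights: global-average-pool the gradients over spatial dims.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature-map channels, then ReLU and normalization to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam /= (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the input resolution before overlaying on the leaf image
```

In practice the returned heatmap is resized to the input image and overlaid as a colour map, highlighting the leaf regions that drove the disease prediction; LIME would instead perturb superpixels of the image and fit a local surrogate model.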