

Evaluation of deep learning models using explainable AI with qualitative and quantitative analysis for rice leaf disease detection.

Authors

Kondaveeti Hari Kishan, Simhadri Chinna Gopi

Affiliations

School of Computer Science and Engineering, VIT-AP University, Amaravathi, 522237, Andhra Pradesh, India.

Department of Computer Science and Engineering, Vignan's Foundation for Science, Technology, and Research, Guntur, 522213, Andhra Pradesh, India.

Publication

Sci Rep. 2025 Aug 29;15(1):31850. doi: 10.1038/s41598-025-14306-3.


DOI: 10.1038/s41598-025-14306-3
PMID: 40883348
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12397440/
Abstract

Deep learning models have shown remarkable success in disease detection and classification tasks, but lack transparency in their decision-making process, creating reliability and trust issues. Although traditional evaluation methods focus entirely on performance metrics such as classification accuracy, precision, and recall, they fail to assess whether the models are considering relevant features for decision-making. The main objective of this work is to develop and validate a comprehensive three-stage methodology that combines conventional performance evaluation with qualitative and quantitative evaluation of explainable artificial intelligence (XAI) visualizations to assess both the accuracy and reliability of deep learning models. Eight pre-trained deep learning models - ResNet50, InceptionResNetV2, DenseNet201, InceptionV3, EfficientNetB0, Xception, VGG16, and AlexNet - were evaluated using a three-stage methodology. First, the models are assessed using traditional classification metrics. Second, Local Interpretable Model-agnostic Explanations (LIME) is employed to visualize and quantitatively evaluate feature selection using metrics such as Intersection over Union (IoU) and the Dice Similarity Coefficient (DSC). Third, a novel overfitting ratio metric is introduced to quantify the reliance of the models on insignificant features. In the experimental analysis, ResNet50 emerged as the most accurate model, achieving 99.13% classification accuracy, as well as the most reliable model, demonstrating superior feature selection capabilities (IoU: 0.432, overfitting ratio: 0.284). Despite their high classification accuracies, models such as InceptionV3 and EfficientNetB0 showed poor feature selection capabilities, with low IoU scores (0.295 and 0.326) and high overfitting ratios (0.544 and 0.458), indicating potential reliability issues in real-world applications.
This study introduces a novel quantitative methodology for evaluating deep learning models that goes beyond traditional accuracy metrics, enabling more reliable and trustworthy AI systems for agricultural applications. This methodology is generic and researchers can explore the possibilities of extending it to other domains that require transparent and interpretable AI systems.
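The second-stage metrics compare the image regions that LIME highlights against ground-truth disease regions. A minimal sketch of IoU and DSC over pixel-coordinate sets is shown below; the abstract does not give the paper's exact definition of the overfitting ratio, so the `overfitting_ratio` here (fraction of highlighted pixels falling outside the ground-truth region) is an illustrative assumption, not the authors' formula:

```python
def iou(pred: set, truth: set) -> float:
    """Intersection over Union of two pixel-coordinate sets."""
    union = pred | truth
    return len(pred & truth) / len(union) if union else 0.0

def dice(pred: set, truth: set) -> float:
    """Dice Similarity Coefficient of two pixel-coordinate sets."""
    total = len(pred) + len(truth)
    return 2 * len(pred & truth) / total if total else 0.0

def overfitting_ratio(pred: set, truth: set) -> float:
    """Assumed definition: share of highlighted pixels outside the
    ground-truth region, so higher means more reliance on irrelevant areas."""
    return len(pred - truth) / len(pred) if pred else 0.0

# Toy example: LIME highlights four pixels, three of which lie in the lesion.
lime_mask = {(0, 0), (0, 1), (1, 0), (1, 1)}
lesion_mask = {(0, 0), (0, 1), (1, 0)}
print(iou(lime_mask, lesion_mask))                # 0.75
print(dice(lime_mask, lesion_mask))               # ~0.857
print(overfitting_ratio(lime_mask, lesion_mask))  # 0.25
```

Under this reading, a model like ResNet50 (IoU 0.432, ratio 0.284) keeps most of its attention inside the lesion, while InceptionV3 (IoU 0.295, ratio 0.544) spends over half of its highlighted area on irrelevant background.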


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/8edeb0fe2994/41598_2025_14306_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/3a8b593733a7/41598_2025_14306_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/34d5b9b13c4a/41598_2025_14306_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/8b2cf8378ab7/41598_2025_14306_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/4f6e32464dc1/41598_2025_14306_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/6fd7494d6e08/41598_2025_14306_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/47ec4c780aad/41598_2025_14306_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/c819ff260c8b/41598_2025_14306_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/2c8b1fe9e158/41598_2025_14306_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/8905009245be/41598_2025_14306_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/6d1641b12cca/41598_2025_14306_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fb14/12397440/b5051c26f045/41598_2025_14306_Fig12_HTML.jpg

Similar Articles

[1]
Evaluation of deep learning models using explainable AI with qualitative and quantitative analysis for rice leaf disease detection.

Sci Rep. 2025-8-29

[2]
Are Artificial Intelligence Models Listening Like Cardiologists? Bridging the Gap Between Artificial Intelligence and Clinical Reasoning in Heart-Sound Classification Using Explainable Artificial Intelligence.

Bioengineering (Basel). 2025-5-22

[3]
Prescription of Controlled Substances: Benefits and Risks

2025-1

[4]
Deep Learning and Image Generator Health Tabular Data (IGHT) for Predicting Overall Survival in Patients With Colorectal Cancer: Retrospective Study.

JMIR Med Inform. 2025-8-19

[5]
Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Sci Rep. 2025-7-1

[6]
CXR-MultiTaskNet a unified deep learning framework for joint disease localization and classification in chest radiographs.

Sci Rep. 2025-8-31

[7]
Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning.

Comput Biol Med. 2024-8

[8]
Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.

Sci Rep. 2025-8-29

[9]
Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19.

Cochrane Database Syst Rev. 2022-5-20

[10]
Novel Artificial Intelligence-Driven Infant Meningitis Screening From High-Resolution Ultrasound Imaging.

Ultrasound Med Biol. 2025-6-28

