
Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review.

Authors

Shahab Ul Hassan, Said Jadid Abdulkadir, M Soperi Mohd Zahid, Safwan Mahmood Al-Selwi

Affiliations

Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia.

Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia.

Publication

Comput Biol Med. 2025 Feb;185:109569. doi: 10.1016/j.compbiomed.2024.109569. Epub 2024 Dec 19.

DOI: 10.1016/j.compbiomed.2024.109569
PMID: 39705792
Abstract

BACKGROUND

The interpretability and explainability of machine learning (ML) and artificial intelligence systems are critical for building trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening effects on patients. Explainable Artificial Intelligence (XAI) has emerged as an increasingly significant research area that addresses the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous published works address medical problems by combining ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of newly emerging LIME techniques within healthcare domains that require more attention in XAI research.
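The abstract describes LIME only at a high level: perturb the input around an instance, query the black-box model, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. As an illustration (not taken from the article), the following is a minimal from-scratch sketch of that idea for tabular data; the function name, sampling scheme, and toy model are all invented for this example, and the real `lime` library's API differs.

```python
import numpy as np

def lime_explain(predict_fn, x, num_samples=500, kernel_width=0.75, seed=0):
    """Minimal LIME-style sketch for tabular data (illustrative only)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance x with Gaussian noise (simplified sampling).
    Z = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    # 2. Query the black-box model on the perturbed samples.
    y = predict_fn(Z)
    # 3. Exponential kernel on Euclidean distance gives locality weights.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Weighted least squares: solve (A^T W A) beta = A^T W y.
    A = np.hstack([np.ones((num_samples, 1)), Z])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[1:]  # per-feature local attributions (intercept dropped)

# Toy black-box model: the first feature dominates the prediction.
f = lambda Z: 3.0 * Z[:, 0] + 0.01 * Z[:, 1]
attrib = lime_explain(f, np.array([1.0, 2.0]))
```

Because the toy model is exactly linear, the surrogate recovers its coefficients, so `attrib` is approximately `[3.0, 0.01]`; for a genuinely nonlinear model the attributions would hold only locally around `x`.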

METHOD

A systematic search of six databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) identified 1614 peer-reviewed articles published between 2019 and 2023.

RESULTS

Fifty-two articles were selected for detailed analysis. These articles show a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.

CONCLUSION

The findings suggest that the integration of XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, thereby potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals.


Similar Articles

1. Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review.
Comput Biol Med. 2025 Feb;185:109569. doi: 10.1016/j.compbiomed.2024.109569. Epub 2024 Dec 19.
2. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
3. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.
4. Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.
Sci Rep. 2025 Jul 1;15(1):20489. doi: 10.1038/s41598-025-07524-2.
5. Explainable AI-driven prediction of APE1 inhibitors: enhancing cancer therapy with machine learning models and feature importance analysis.
Mol Divers. 2025 Feb 21. doi: 10.1007/s11030-025-11133-6.
6. Explainable machine learning for breast cancer diagnosis from mammography and ultrasound images: a systematic review.
BMJ Health Care Inform. 2024 Feb 2;31(1):e100954. doi: 10.1136/bmjhci-2023-100954.
7. The measurement of collaboration within healthcare settings: a systematic review of measurement properties of instruments.
JBI Database System Rev Implement Rep. 2016 Apr;14(4):138-97. doi: 10.11124/JBISRIR-2016-2159.
8. Advancing personalized healthcare: leveraging explainable AI for BPPV risk assessment.
Health Inf Sci Syst. 2024 Nov 24;13(1):1. doi: 10.1007/s13755-024-00317-3. eCollection 2025 Dec.
9. Designing Clinical Decision Support Systems (CDSS)-A User-Centered Lens of the Design Characteristics, Challenges, and Implications: Systematic Review.
J Med Internet Res. 2025 Jun 20;27:e63733. doi: 10.2196/63733.
10. Prediction of disease comorbidity using explainable artificial intelligence and machine learning techniques: A systematic review.
Int J Med Inform. 2023 Jul;175:105088. doi: 10.1016/j.ijmedinf.2023.105088. Epub 2023 May 4.

Cited By

1. Application of artificial intelligence in oral potentially malignant disorders: current opinions and future barriers.
Clin Transl Oncol. 2025 Aug 30. doi: 10.1007/s12094-025-04043-4.
2. Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.
Sci Rep. 2025 Jul 1;15(1):20542. doi: 10.1038/s41598-025-06455-2.
3. Multi-source data fusion-based knowledge transfer for unmanned aerial vehicle flight data anomaly detection and recovery.
Sci Rep. 2025 Jul 1;15(1):20924. doi: 10.1038/s41598-025-05322-4.
4. Explainable Artificial Intelligence in Radiological Cardiovascular Imaging-A Systematic Review.
Diagnostics (Basel). 2025 May 31;15(11):1399. doi: 10.3390/diagnostics15111399.
5. Optimizing the power of AI for fracture detection: from blind spots to breakthroughs.
Skeletal Radiol. 2025 May 23. doi: 10.1007/s00256-025-04951-0.
6. The role of nanomedicine and artificial intelligence in cancer health care: individual applications and emerging integrations-a narrative review.
Discov Oncol. 2025 May 8;16(1):697. doi: 10.1007/s12672-025-02469-4.
7. Advancing the diagnosis of major depressive disorder: Integrating neuroimaging and machine learning.
World J Psychiatry. 2025 Mar 19;15(3):103321. doi: 10.5498/wjp.v15.i3.103321.