
Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review.

Author Information

Hassan Shahab Ul, Abdulkadir Said Jadid, Zahid M Soperi Mohd, Al-Selwi Safwan Mahmood

Affiliations

Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia.

Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia; Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia.

Publication Information

Comput Biol Med. 2025 Feb;185:109569. doi: 10.1016/j.compbiomed.2024.109569. Epub 2024 Dec 19.

Abstract

BACKGROUND

The interpretability and explainability of machine learning (ML) and artificial intelligence systems are critical for generating trust in their outcomes in fields such as medicine and healthcare. Errors generated by these systems, such as inaccurate diagnoses or treatments, can have serious and even life-threatening effects on patients. Explainable Artificial Intelligence (XAI) is emerging as an increasingly significant area of research that addresses the black-box nature of sophisticated and difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous studies have addressed medical problems by pairing ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of newly emerging LIME techniques within healthcare domains that require more attention in the realm of XAI research.
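LIME's core idea, as described in the paragraph above, can be sketched in a few lines: perturb the instance being explained, query the black-box model on the perturbed samples, weight each sample by its proximity to the original instance, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The sketch below is illustrative only; the `black_box` model and all numbers are hypothetical, not drawn from any study in the review.

```python
import numpy as np

# Hypothetical black-box classifier: outputs a risk score from 3 features.
# (Stands in for any opaque ML model; its internals are never inspected.)
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2])))

rng = np.random.default_rng(0)
x0 = np.array([0.5, -0.2, 1.0])            # instance to explain

# 1. Perturb: sample points in a neighborhood around x0.
Z = x0 + rng.normal(scale=0.5, size=(500, 3))
y = black_box(Z)                            # query the black box

# 2. Weight samples by proximity to x0 (exponential kernel).
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.5)

# 3. Fit a weighted linear surrogate (closed-form weighted least squares).
A = np.hstack([Z, np.ones((len(Z), 1))])    # append an intercept column
coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))

print(coef[:3])                             # local feature attributions
```

The signs and relative magnitudes of `coef[:3]` recover the local behavior of the black box around `x0` (feature 1 pushes the score up, feature 2 down), which is exactly the kind of per-prediction explanation the surveyed studies use LIME for. For medical images, the same idea is applied to superpixels rather than tabular features.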

METHOD

A systematic search was conducted in numerous databases (Scopus, Web of Science, IEEE Xplore, ScienceDirect, MDPI, and PubMed) that identified 1614 peer-reviewed articles published between 2019 and 2023.

RESULTS

Fifty-two articles were selected for detailed analysis; these showed a growing trend in the application of LIME techniques in healthcare, with significant improvements in the interpretability of ML models used for diagnostic and prognostic purposes.

CONCLUSION

The findings suggest that the integration of XAI techniques, particularly LIME, enhances the transparency and trustworthiness of AI systems in healthcare, thereby potentially improving patient outcomes and fostering greater acceptance of AI-driven solutions among medical professionals.

