

The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.

Affiliations

Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.

Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Sindh, Pakistan.

Publication information

Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.

Abstract

In the medical and healthcare domains, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research focused on opening up the black-box nature of complex, hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how the models actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can explain these models' predictions by quantifying feature importance, thereby improving trust and confidence in the systems. Many published articles propose solutions to medical problems by pairing machine learning models with XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018 to 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
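The abstract itself contains no code, but as a rough illustration of the feature-importance explanations it describes, the minimal Python sketch below applies SHAP to a generic tree-ensemble classifier. It is a hypothetical example, not taken from the article or the reviewed studies; it assumes the scikit-learn and shap packages are installed and uses scikit-learn's built-in breast cancer dataset as a stand-in for medical tabular data.

    # Hypothetical sketch (not from the article): explaining a black-box
    # classifier's predictions with SHAP feature importances.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in medical tabular dataset (tumor measurements -> malignancy).
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A "black-box" model: a random forest with 100 trees.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # TreeExplainer computes SHAP values for tree ensembles; each value is
    # one feature's additive contribution to one individual prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Global feature importance: mean |SHAP value| per feature across the
    # test set, rendered as a bar chart.
    shap.summary_plot(shap_values, X_test, plot_type="bar")

LIME, the other algorithm named in the abstract, works analogously but fits a simple local surrogate model around each individual prediction instead of computing additive Shapley contributions.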

