A review of explainable and interpretable AI with applications in COVID-19 imaging.

Affiliations

Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA.

Department of Radiology, The University of Chicago, Chicago, Illinois, USA.

Publication Information

Med Phys. 2022 Jan;49(1):1-14. doi: 10.1002/mp.15359. Epub 2021 Dec 7.

Abstract

The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to utilize explainability techniques in an attempt to help a user understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. AI application to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease, and how these techniques can restore trust in AI applications to this disease. This includes the identification of common tasks that are relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and will assist in the selection of an approach that is both appropriate and effective for a given scenario.


Similar Articles

1. A review of explainable and interpretable AI with applications in COVID-19 imaging. Med Phys. 2022 Jan;49(1):1-14. doi: 10.1002/mp.15359. Epub 2021 Dec 7.
2. Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered lens. PLoS One. 2024 Oct 9;19(10):e0308758. doi: 10.1371/journal.pone.0308758. eCollection 2024.
3. Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
4. Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges. Ann Med. 2023;55(2):2286336. doi: 10.1080/07853890.2023.2286336. Epub 2023 Nov 27.
5. Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review. IEEE Rev Biomed Eng. 2023;16:5-21. doi: 10.1109/RBME.2022.3185953. Epub 2023 Jan 5.
6. An Overview of Explainable AI Studies in the Prediction of Sepsis Onset and Sepsis Mortality. Stud Health Technol Inform. 2024 Aug 22;316:808-812. doi: 10.3233/SHTI240534.
7. Explainable AI for Bioinformatics: Methods, Tools and Applications. Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
8. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
9. Defining factors in hospital admissions during COVID-19 using LSTM-FCA explainable model. Artif Intell Med. 2022 Oct;132:102394. doi: 10.1016/j.artmed.2022.102394. Epub 2022 Sep 5.
10. Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches. Eur J Radiol. 2023 May;162:110786. doi: 10.1016/j.ejrad.2023.110786. Epub 2023 Mar 20.

Cited By

1. Comparative analysis of AI support levels in clinical interpretation of traumatic pelvic radiographs. NPJ Digit Med. 2025 Aug 13;8(1):518. doi: 10.1038/s41746-025-01923-5.
2. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI Ethics. 2025 Apr;5(2):1499-1521. doi: 10.1007/s43681-024-00493-8. Epub 2024 May 27.
3. Enhancing interpretability and accuracy of AI models in healthcare: a comprehensive review on challenges and future directions. Front Robot AI. 2024 Nov 28;11:1444763. doi: 10.3389/frobt.2024.1444763. eCollection 2024.
4. Integrating artificial intelligence with smartphone-based imaging for cancer detection in vivo. Biosens Bioelectron. 2025 Mar 1;271:116982. doi: 10.1016/j.bios.2024.116982. Epub 2024 Nov 21.
5. Regulation of AI algorithms for clinical decision support: a personal opinion. Int J Comput Assist Radiol Surg. 2024 Apr;19(4):609-611. doi: 10.1007/s11548-024-03088-0. Epub 2024 Mar 13.
6. Deciphering the Feature Representation of Deep Neural Networks for High-Performance AI. IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5273-5287. doi: 10.1109/TPAMI.2024.3363642. Epub 2024 Jul 2.
7. A critical moment in machine learning in medicine: on reproducible and interpretable learning. Acta Neurochir (Wien). 2024 Jan 16;166(1):14. doi: 10.1007/s00701-024-05892-8.

References

1. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
2. Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging. Radiol Artif Intell. 2021 Oct 6;3(6):e200267. doi: 10.1148/ryai.2021200267. eCollection 2021 Nov.
3. Explainable Deep Learning Models in Medical Image Analysis. J Imaging. 2020 Jun 20;6(6):52. doi: 10.3390/jimaging6060052.
4. Deep Learning Enables Accurate Diagnosis of Novel Coronavirus (COVID-19) With CT Images. IEEE/ACM Trans Comput Biol Bioinform. 2021 Nov-Dec;18(6):2775-2780. doi: 10.1109/TCBB.2021.3065361. Epub 2021 Dec 8.
5. Image Segmentation Using Deep Learning: A Survey. IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3523-3542. doi: 10.1109/TPAMI.2021.3059968. Epub 2022 Jun 3.
6. Trade-off Predictivity and Explainability for Machine-Learning Powered Predictive Toxicology: An in-Depth Investigation with Tox21 Data Sets. Chem Res Toxicol. 2021 Feb 15;34(2):541-549. doi: 10.1021/acs.chemrestox.0c00373. Epub 2021 Jan 29.
7. The RSNA International COVID-19 Open Radiology Database (RICORD). Radiology. 2021 Apr;299(1):E204-E213. doi: 10.1148/radiol.2021203957. Epub 2021 Jan 5.
8. Handcrafted versus deep learning radiomics for prediction of cancer therapy response. Lancet Digit Health. 2019 Jul;1(3):e106-e107. doi: 10.1016/S2589-7500(19)30062-7. Epub 2019 Jun 27.
