

Current status and future directions of explainable artificial intelligence in medical imaging.

Authors

Saw Shier Nee, Yan Yet Yen, Ng Kwan Hoong

Affiliations

Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur 50603, Malaysia.

Department of Radiology, Changi General Hospital, Singapore; Radiological Sciences ACP, Duke-NUS Medical School, Singapore; Present Address: Department of Diagnostic Radiology, Mount Elizabeth Hospital, 3 Mount Elizabeth, Singapore 228510, Republic of Singapore.

Publication

Eur J Radiol. 2025 Feb;183:111884. doi: 10.1016/j.ejrad.2024.111884. Epub 2024 Dec 6.

DOI: 10.1016/j.ejrad.2024.111884
PMID: 39667118
Abstract

The inherent "black box" nature of AI algorithms presents a substantial barrier to the widespread adoption of the technology in clinical settings, leading to a lack of trust among users. This review begins by examining the foundational stages involved in the interpretation of medical images by radiologists and clinicians, encompassing both type 1 (fast thinking - the brain's ability to think and act intuitively) and type 2 (slow thinking - an analytical, laborious approach to decision-making) decision-making processes. The discussion then delves into current Explainable AI (XAI) approaches, exploring both inherent and post-hoc explainability for medical imaging applications and highlighting the milestones achieved. XAI in medicine refers to AI systems designed to provide transparent, interpretable, and understandable reasoning behind AI predictions or decisions. Additionally, the paper showcases some commercial AI medical systems that offer explanations through features such as heatmaps. Opportunities, challenges, and potential avenues for advancing the field are also addressed. In conclusion, the review observes that state-of-the-art XAI methods are not yet mature enough for clinical implementation, as the explanations they provide remain difficult for medical experts to comprehend. A deeper understanding of the cognitive mechanisms employed by medical professionals is important for developing more interpretable XAI methods.


Similar Articles

1. Current status and future directions of explainable artificial intelligence in medical imaging. Eur J Radiol. 2025 Feb;183:111884. doi: 10.1016/j.ejrad.2024.111884. Epub 2024 Dec 6.
2. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review. J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
3. Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches. Eur J Radiol. 2023 May;162:110787. doi: 10.1016/j.ejrad.2023.110787. Epub 2023 Mar 21.
4. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis. Comput Struct Biotechnol J. 2024 Aug 12;24:542-560. doi: 10.1016/j.csbj.2024.08.005. eCollection 2024 Dec.
5. Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space. J Neural Eng. 2024 Aug 8;21(4). doi: 10.1088/1741-2552/ad6593.
6. Explainable AI in Diagnostic Radiology for Neurological Disorders: A Systematic Review, and What Doctors Think About It. Diagnostics (Basel). 2025 Jan 13;15(2):168. doi: 10.3390/diagnostics15020168.
7. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
8. Explainability and white box in drug discovery. Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
9. Explainable AI for Bioinformatics: Methods, Tools and Applications. Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
10. Current methods in explainable artificial intelligence and future prospects for integrative physiology. Pflugers Arch. 2025 Apr;477(4):513-529. doi: 10.1007/s00424-025-03067-7. Epub 2025 Feb 25.

Cited By

1. From detection to decision: Can deep learning-based CADx meet the challenge of incidental pulmonary nodules? Eur Radiol. 2025 Sep 4. doi: 10.1007/s00330-025-11935-0.
2. Histological Image Classification Between Follicular Lymphoma and Reactive Lymphoid Tissue Using Deep Learning and Explainable Artificial Intelligence (XAI). Cancers (Basel). 2025 Jul 22;17(15):2428. doi: 10.3390/cancers17152428.
3. Artificial Intelligence in Cardiovascular Imaging: Current Landscape, Clinical Impact, and Future Directions. Discoveries (Craiova). 2025 Jun 30;13(1):e211. doi: 10.15190/d.2025.10. eCollection 2025 Apr-Jun.
4. Advancing ethical AI in healthcare through interpretability. Patterns (N Y). 2025 Jun 13;6(6):101290. doi: 10.1016/j.patter.2025.101290.
5. A comprehensive review of machine learning for heart disease prediction: challenges, trends, ethical considerations, and future directions. Front Artif Intell. 2025 May 13;8:1583459. doi: 10.3389/frai.2025.1583459. eCollection 2025.