
A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging.

Affiliations

School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland.

Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais) Sierre, CH, Switzerland; Medical faculty, University of Geneva, CH, Switzerland.

Publication information

Eur J Radiol. 2023 Dec;169:111159. doi: 10.1016/j.ejrad.2023.111159. Epub 2023 Oct 21.

Abstract

PURPOSE

To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI).

METHOD

A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving disagreements through discussion.

RESULTS

228 studies met the inclusion criteria. XAI publications are increasing, targeting mainly MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, COVID-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The explanations provided were most frequently local (78.1 %), 5.7 % were global, and 16.2 % combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology used varied, sometimes employing explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) interchangeably.
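The "post-hoc, local, visual" combination that dominates these results can be made concrete with a minimal sketch. The snippet below implements occlusion sensitivity, a simple post-hoc technique of the kind the review surveys: it treats a trained model as a black box and produces a local, visual explanation for one input by measuring how the model's score drops when each image region is masked. The `toy_classifier` is a hypothetical stand-in (the reviewed studies use CNNs on MRI, CT, or radiographs), not a method from the review itself.

```python
import numpy as np

def toy_classifier(image):
    """Hypothetical stand-in 'model': its score is the mean intensity of a
    fixed central region, so only that region should appear important."""
    return image[3:6, 3:6].mean()

def occlusion_saliency(image, model, patch=3):
    """Post-hoc, local, visual explanation: slide a zero-valued patch over
    the input and record the score drop at each position. Larger drops mark
    regions the model relies on for this particular input."""
    base = model(image)
    h, w = image.shape
    heatmap = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(heatmap.shape[0]):
        for j in range(heatmap.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one region
            heatmap[i, j] = base - model(occluded)
    return heatmap

img = np.ones((9, 9))           # toy 9x9 "image"
hm = occlusion_saliency(img, toy_classifier)
```

The resulting heatmap is local (it explains one prediction, not the whole model) and post-hoc (it needs no access to the model's internals), which matches the pattern reported above; global methods would instead characterize the model's behavior across the dataset.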

CONCLUSION

The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats predominate. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.

