Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks.

Authors

Nazir Sajid, Dickson Diane M, Akram Muhammad Usman

Affiliations

Department of Computing, Glasgow Caledonian University, Glasgow, UK.

Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK.

Publication information

Comput Biol Med. 2023 Apr;156:106668. doi: 10.1016/j.compbiomed.2023.106668. Epub 2023 Feb 18.

Abstract

Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. Despite these results, the adoption of such techniques in clinical practice is proceeding at only a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but the questions of why and how that prediction was made remain unanswered. Answering these questions is of utmost importance in the regulated healthcare domain, as it increases the trust of practitioners, patients, and other stakeholders in automated diagnosis systems. The application of deep learning to medical imaging must be interpreted with caution because of health and safety concerns, analogous to blame attribution in the case of an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters, and have a 'black box' nature that, unlike traditional machine learning algorithms, offers little understanding of their inner workings. Explainable AI (XAI) techniques help to explain model predictions, which develops trust in the system, accelerates disease diagnosis, and supports adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also categorize XAI techniques, discuss the open challenges, and provide future directions for XAI that will be of interest to clinicians, regulators, and model developers.
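To make the idea of explaining a DNN prediction concrete, the sketch below implements Grad-CAM, one of the widely used saliency-map XAI techniques that surveys of this field cover. It is a minimal illustration, not the authors' method: it assumes PyTorch and torchvision are available (torchvision >= 0.13 for the `weights=` argument) and uses an untrained ResNet-18 with a random tensor as stand-ins for a diagnostic model and a preprocessed medical image.

```python
# Minimal Grad-CAM sketch: a saliency-based XAI technique that highlights
# the image regions most responsible for a classifier's prediction.
# Assumes PyTorch + torchvision; the model and input are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a trained diagnostic model
model.eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    # Capture the feature maps of the hooked layer on the forward pass.
    activations["value"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    # Capture the gradient of the class score w.r.t. those feature maps.
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block, whose feature maps Grad-CAM explains.
model.layer4.register_forward_hook(forward_hook)
model.layer4.register_full_backward_hook(backward_hook)

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed scan
logits = model(image)
class_idx = logits.argmax(dim=1).item()
logits[0, class_idx].backward()  # backprop the predicted class score

# Channel weights = global-average-pooled gradients; weighted sum + ReLU
# yields the class activation map, upsampled to the input resolution.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# `cam` is a heatmap over the input highlighting regions that drove the prediction.
```

In practice, the resulting heatmap is overlaid on the input scan so a clinician can check whether the model attended to clinically relevant regions, which is one concrete way XAI builds the trust the abstract describes.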
