The need for balancing 'black box' systems and explainable artificial intelligence: A necessary implementation in radiology.

Author information

De-Giorgio Fabio, Benedetti Beatrice, Mancino Matteo, Sala Evis, Pascali Vincenzo L

Affiliations

Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Department of Healthcare Surveillance and Bioethics, Section of Legal Medicine, Università Cattolica del Sacro Cuore, Rome, Italy.

Publication information

Eur J Radiol. 2025 Apr;185:112014. doi: 10.1016/j.ejrad.2025.112014. Epub 2025 Feb 26.

Abstract

Radiology is one of the medical specialties most significantly impacted by Artificial Intelligence (AI). AI systems, particularly those employing machine and deep learning, excel in processing large datasets and comparing images from similar contexts, fulfilling radiological demands. However, the implementation of AI in radiology presents notable challenges, including concerns about data privacy, informed consent, and the potential for external interferences affecting decision-making processes. Biases represent another critical issue, often stemming from unrepresentative datasets or inadequate system training, which can lead to distorted outcomes and exacerbate healthcare inequalities. Additionally, generative AI systems may produce 'hallucinations' arising from their reliance on probabilistic modeling without the ability to distinguish between true and false information. Such risks raise ethical and legal questions, especially when AI-induced errors harm patient health. Concerning liability for medical errors involving AI, healthcare professionals currently retain full accountability for their decisions. AI systems remain tools to support, not replace, human expertise and judgment. Nevertheless, the "black box" nature of many AI models - wherein the reasoning behind outputs remains opaque - limits the possibility of fully informed consent. We advocate for prioritizing Explainable Artificial Intelligence (XAI) in radiology. While potentially less performant than black-box models, XAI enhances transparency, allowing patients to understand how their data is used and how AI influences clinical decisions, aligning with ethical standards.
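The article itself contains no code; purely as an illustration of the kind of transparency the authors advocate, the sketch below (our own, not the authors' method) shows one widely used XAI technique for imaging models, Grad-CAM, which highlights the image regions that most influenced a convolutional network's prediction. It assumes PyTorch and torchvision; the random tensor is a hypothetical stand-in for a preprocessed radiograph.

    # Minimal Grad-CAM sketch (illustrative only; not from the article).
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Pretrained ResNet-18 as a stand-in classifier (weights are downloaded).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["feat"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["feat"] = grad_out[0].detach()

    # Hook the last convolutional block; its feature maps keep spatial layout.
    model.layer4.register_forward_hook(fwd_hook)
    model.layer4.register_full_backward_hook(bwd_hook)

    x = torch.randn(1, 3, 224, 224)  # hypothetical preprocessed radiograph
    logits = model(x)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()  # gradients of the predicted class score

    # Weight each feature map by its average gradient, combine, and rectify.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
    # 'cam' is a heatmap over the input: brighter regions contributed more to
    # the prediction, giving a radiologist something concrete to inspect.

Saliency maps of this kind do not make a model fully interpretable, but they give clinicians and patients a visible account of what drove an output, which is the trade-off between performance and transparency the abstract discusses.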
