University of Verona, Verona, Italy.
Artif Intell Med. 2022 Nov;133:102423. doi: 10.1016/j.artmed.2022.102423. Epub 2022 Oct 9.
The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.