Brankovic Aida, Cook David, Rahman Jessica, Delaforce Alana, Li Jane, Magrabi Farah, Cabitza Federico, Coiera Enrico, Bradford DanaKai
CSIRO's Australian eHealth Research Centre, Herston, QLD, Australia.
The University of Queensland, Brisbane, QLD, Australia.
NPJ Digit Med. 2025 Jun 14;8(1):364. doi: 10.1038/s41746-025-01764-2.
The rapid growth of clinical explainable AI (XAI) models has raised concerns over unclear purposes and false hope regarding explanations. Currently, no standardised metrics exist for XAI evaluation. We developed a clinician-informed, 14-item checklist covering clinical, machine and decision attributes. This is a first step toward XAI standardisation and transparent reporting of XAI methods, aiming to enhance trust, reduce risks, foster AI adoption, and improve decision-making, so that the true clinical potential of applied XAI can be determined.