Salih Ahmed M, Boscolo Galazzo Ilaria, Gkontra Polyxeni, Rauseo Elisa, Lee Aaron Mark, Lekadir Karim, Radeva Petia, Petersen Steffen E, Menegaz Gloria
William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London EC1M 6BQ, UK.
Department of Population Health Sciences, University of Leicester, University Rd, Leicester LE1 7RH, UK.
Artif Intell Rev. 2024;57(9):240. doi: 10.1007/s10462-024-10852-w. Epub 2024 Aug 9.
Explainable artificial intelligence (XAI) elucidates the decision-making process of complex AI models and is important in building trust in model predictions. XAI explanations themselves require evaluation for accuracy and reasonableness, and in the context in which the underlying AI model is used. This review details the evaluation of XAI in cardiac AI applications and finds that, of the studies examined, 37% evaluated XAI quality using literature results, 11% used clinicians as domain experts, 11% used proxies or statistical analysis, and the remaining 43% did not assess the XAI used at all. We aim to inspire additional studies within healthcare, urging researchers not only to apply XAI methods but also to systematically assess the resulting explanations, as a step towards developing trustworthy and safe models.
The online version contains supplementary material available at 10.1007/s10462-024-10852-w.