Salih Ahmed M, Menegaz Gloria, Pillay Thillagavathie, Boyle Elaine M
Department of Population Health Sciences, University of Leicester, Leicester, UK.
William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, London, UK.
Health Sci Rep. 2024 Dec 12;7(12):e70271. doi: 10.1002/hsr2.70271. eCollection 2024 Dec.
Explainable artificial intelligence (XAI) emerged to improve the transparency of machine learning models and to increase understanding of how models arrive at their actions and decisions. It helps present complex models in a form more digestible from a human perspective. However, XAI is still at a developmental stage and must be used carefully in sensitive domains, including paediatrics, where misuse might have adverse consequences.
This commentary paper discusses concerns and challenges related to the implementation and interpretation of XAI methods, with the aim of raising awareness of the main concerns regarding their adoption in paediatrics.
A comprehensive literature review was undertaken to explore the challenges of adopting XAI in paediatrics.
Although XAI offers several benefits, its implementation in paediatrics faces challenges, including generalizability, trustworthiness, causality and intervention, and the evaluation of XAI methods.
Paediatrics is a highly sensitive domain in which the consequences of misinterpreting AI outcomes can be very significant. XAI should be adopted carefully, with a focus on evaluating its outcomes, primarily by keeping paediatricians in the loop, enriching the pipeline with domain knowledge, and promoting a cross-fertilization perspective aimed at filling the gaps that still prevent its adoption.