Explainable Artificial Intelligence in Paediatric: Challenges for the Future.

Author Information

Salih Ahmed M, Menegaz Gloria, Pillay Thillagavathie, Boyle Elaine M

Affiliations

Department of Population Health Sciences, University of Leicester, Leicester, UK.

William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, London, UK.

Publication Information

Health Sci Rep. 2024 Dec 12;7(12):e70271. doi: 10.1002/hsr2.70271. eCollection 2024 Dec.

Abstract

BACKGROUND

Explainable artificial intelligence (XAI) emerged to improve the transparency of machine learning models and to increase understanding of how models arrive at their actions and decisions. It helps to present complex models in a form that is more digestible from a human perspective. However, XAI is still under development and must be used carefully in sensitive domains, including paediatrics, where misuse might have adverse consequences.
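For readers unfamiliar with XAI, the minimal sketch below illustrates the kind of per-feature explanation such methods typically produce. It assumes the open-source shap library, scikit-learn, and a public diabetes dataset; these tools and data are chosen purely for illustration and are not drawn from the commentary itself.

# Minimal illustrative sketch of a model-explanation workflow, assuming the
# open-source `shap` and `scikit-learn` libraries (not tools referenced by
# the authors). It shows how a black-box prediction can be decomposed into
# per-feature contributions, the kind of output an XAI method provides.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model on a public clinical dataset
# (disease-progression scores for diabetes patients).
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# SHAP attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])  # explain 5 patients

# Which features drove the first patient's predicted score, and by how much.
print(dict(zip(data.feature_names, shap_values[0].round(2))))

Presenting attributions of this kind to clinicians is exactly where the challenges discussed below (generalizability, trustworthiness, causality, and evaluation) arise.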

OBJECTIVE

This commentary discusses concerns and challenges related to the implementation and interpretation of XAI methods, with the aim of raising awareness of the main concerns regarding their adoption in paediatrics.

METHODS

A comprehensive literature review was undertaken to explore the challenges of adopting XAI in paediatrics.

RESULTS

Although XAI offers several favorable outcomes, its implementation in paediatrics is prone to challenges, including generalizability, trustworthiness, causality and intervention, and the evaluation of XAI itself.

CONCLUSION

Paediatrics is a highly sensitive domain in which the consequences of misinterpreting AI outcomes can be very significant. XAI should therefore be adopted carefully, with a focus on evaluating its outcomes: primarily by keeping paediatricians in the loop, by enriching the pipeline with injected domain knowledge, and by promoting a cross-fertilization perspective aimed at filling the gaps that still prevent its adoption.
