Big Data in Medicine, Department of Health Services Research, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
Stud Health Technol Inform. 2024 Aug 22;316:766-770. doi: 10.3233/SHTI240525.
In recent years, artificial intelligence (AI) has gained momentum in many fields of daily life. In healthcare, AI can be used for diagnosing or predicting illnesses. However, explainable AI (XAI) is needed to ensure that users understand how an algorithm arrives at a decision. In our research project, machine learning methods are used for individual risk prediction of hospital-onset bacteremia (HOB). This paper presents a vision of a step-wise process for the implementation and evaluation of user-centered XAI for risk prediction of HOB. An initial requirement analysis revealed first insights into users' explainability needs for using and trusting such risk prediction applications. These findings were then used to propose a step-wise process towards a user-centered evaluation.