Hou Jun, Wang Lucy Lu
Virginia Tech, Blacksburg, VA.
University of Washington, Seattle, WA.
AMIA Jt Summits Transl Sci Proc. 2025 Jun 10;2025:215-224. eCollection 2025.
Explainable AI (XAI) techniques are necessary to help clinicians make sense of AI predictions and integrate predictions into their decision-making workflow. In this work, we conduct a survey study to understand clinician preferences among different XAI techniques when they are used to interpret model predictions over text-based EHR data. We implement four XAI techniques (LIME, attention-based span highlights, exemplar patient retrieval, and free-text rationales generated by LLMs) on an outcome prediction model that uses ICU admission notes to predict a patient's likelihood of in-hospital mortality. Using these XAI implementations, we design and conduct a survey of 32 practicing clinicians, collecting their feedback on and preferences among the four techniques. We synthesize our findings into a set of recommendations describing when each XAI technique may be most appropriate, its potential limitations, and directions for improvement.
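The abstract names LIME as one of the four XAI techniques applied to the text-based mortality model. The sketch below is a minimal illustration of how LIME can be run over a text classifier; it assumes a scikit-learn pipeline as a stand-in for the paper's actual prediction model, and the notes, labels, and class names are invented toy examples, not the paper's ICU data or configuration.

    # Minimal sketch: LIME over a text classifier standing in for the
    # paper's ICU mortality model. All data below is illustrative.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical stand-ins for ICU admission notes; 1 = in-hospital mortality.
    notes = [
        "elderly patient intubated for septic shock, rising lactate",
        "young patient admitted for observation, stable vitals overnight",
        "multi-organ failure, vasopressors started, family meeting held",
        "routine post-operative course, ambulating, pain well controlled",
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(notes, labels)

    # LIME perturbs the note by masking words and fits a local linear
    # surrogate to the model's predicted probabilities.
    explainer = LimeTextExplainer(class_names=["survived", "died"])
    explanation = explainer.explain_instance(
        notes[0], model.predict_proba, num_features=5
    )
    print(explanation.as_list())  # (word, weight) pairs driving the prediction

The (word, weight) pairs returned by as_list() are what would be rendered as token-level highlights in a clinician-facing interface; the other three techniques in the study (attention-based span highlights, exemplar patient retrieval, LLM-generated rationales) require model internals or external components not shown here.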