Sorbonne Université, Université Sorbonne Paris Nord, INSERM, Laboratoire d'Informatique Médicale et d'Ingénierie des connaissances en e-Santé, LIMICS, Paris, France.
AP-HP, Hôpital Tenon, Paris, France.
Stud Health Technol Inform. 2024 Aug 22;316:846-850. doi: 10.3233/SHTI240544.
Text classification plays an essential role in the medical domain, organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, which are often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the decision-making processes of AI models. In this paper, we present a scoping review exploring the application of different XAI techniques in medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the need for further XAI research to enhance trust and transparency in AI-driven decision-making in healthcare.
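To make the model-agnostic category concrete, the sketch below applies LIME (a widely used model-agnostic XAI technique) to a toy medical text classifier. The pipeline, class labels, and sample clinical phrases are illustrative assumptions for this example only, not drawn from the studies covered by the review.

```python
# A minimal sketch of a model-agnostic XAI technique (LIME) applied to a
# toy medical text classifier. The corpus, labels, and example note below
# are hypothetical, chosen only to illustrate the approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Hypothetical training corpus: two classes of clinical notes.
texts = [
    "patient reports chest pain and shortness of breath",
    "myocardial infarction suspected, troponin elevated",
    "routine follow-up, no complaints, vitals stable",
    "annual physical exam, patient in good health",
]
labels = [1, 1, 0, 0]  # 1 = cardiac, 0 = routine

# Black-box classifier: TF-IDF features + logistic regression.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model,
# attributing the prediction to individual words; it only needs the
# classifier's predict_proba function, not its internals.
explainer = LimeTextExplainer(class_names=["routine", "cardiac"])
explanation = explainer.explain_instance(
    "patient admitted with chest pain, troponin pending",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # (word, weight) pairs for the prediction
```

Because the explainer treats the classifier as a black box, the same code would work unchanged with any text model exposing a probability function, which is precisely what distinguishes model-agnostic from model-specific methods.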