From explainable to interpretable deep learning for natural language processing in healthcare: How far from reality?

Author information

Guangming Huang, Yingya Li, Shoaib Jameel, Yunfei Long, Giorgos Papanastasiou

Affiliations

School of Computer Science and Electronic Engineering, University of Essex, Colchester, CO4 3SQ, United Kingdom.

Harvard Medical School and Boston Children's Hospital, Boston, MA 02115, United States.

Publication information

Comput Struct Biotechnol J. 2024 May 9;24:362-373. doi: 10.1016/j.csbj.2024.05.004. eCollection 2024 Dec.

Abstract

Deep learning (DL) has substantially enhanced natural language processing (NLP) in healthcare research. However, the increasing complexity of DL-based NLP necessitates transparent model interpretability, or at least explainability, for reliable decision-making. This work presents a thorough scoping review of explainable and interpretable DL in healthcare NLP. The term "eXplainable and Interpretable Artificial Intelligence" (XIAI) is introduced to distinguish XAI from IAI. Different models are further categorized based on their functionality (model-, input-, output-based) and scope (local, global). Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique. The use of IAI is growing, distinguishing it from XAI. The major challenges identified are that most XIAI methods do not explore "global" modelling processes, and that best practices, systematic evaluation, and benchmarks are lacking. One important opportunity is to use attention mechanisms to enhance multi-modal XIAI for personalized medicine. Additionally, combining DL with causal logic holds promise. Our discussion encourages the integration of XIAI in Large Language Models (LLMs) and domain-specific smaller models. In conclusion, XIAI adoption in healthcare requires dedicated in-house expertise. Collaboration with domain experts, end-users, and policymakers can lead to ready-to-use XIAI methods across NLP and medical tasks. While challenges exist, XIAI techniques offer a valuable foundation for interpretable NLP algorithms in healthcare.
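Illustrative note: the abstract names attention mechanisms as the most prevalent emerging IAI technique. The snippet below is a minimal sketch, not taken from the reviewed studies, of how per-token attention weights can be read out of a transformer encoder as a crude, "local" importance signal. The Hugging Face transformers API usage, the model name bert-base-uncased, and the example sentence are all illustrative assumptions; a clinical-domain encoder could be substituted.

import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical setup: a general-purpose encoder stands in for the
# domain-specific clinical models surveyed in the review.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

text = "Patient reports chest pain and shortness of breath."  # illustrative input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
# Averaging the last layer's heads and reading the [CLS] row yields a simple
# "local" (single-input) token-importance score.
last_layer = outputs.attentions[-1].mean(dim=1)  # (batch, seq, seq)
cls_row = last_layer[0, 0]                       # attention from [CLS] to each token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, weight in zip(tokens, cls_row.tolist()):
    print(f"{token:12s} {weight:.3f}")

In the paper's taxonomy such attention read-outs are local in scope, and their faithfulness as explanations remains contested, which echoes the review's point about the lack of systematic evaluation and benchmarks.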


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8a8/11126530/3d9c70804048/gr001.jpg
