
From explainable to interpretable deep learning for natural language processing in healthcare: How far from reality?

Authors

Huang Guangming, Li Yingya, Jameel Shoaib, Long Yunfei, Papanastasiou Giorgos

Affiliations

School of Computer Science and Electronic Engineering, University of Essex, Colchester, CO4 3SQ, United Kingdom.

Harvard Medical School and Boston Children's Hospital, Boston, 02115, United States.

Publication

Comput Struct Biotechnol J. 2024 May 9;24:362-373. doi: 10.1016/j.csbj.2024.05.004. eCollection 2024 Dec.

DOI: 10.1016/j.csbj.2024.05.004
PMID: 38800693
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11126530/
Abstract

Deep learning (DL) has substantially enhanced natural language processing (NLP) in healthcare research. However, the increasing complexity of DL-based NLP necessitates transparent model interpretability, or at least explainability, for reliable decision-making. This work presents a thorough scoping review of explainable and interpretable DL in healthcare NLP. The term "eXplainable and Interpretable Artificial Intelligence" (XIAI) is introduced to distinguish XAI from IAI. Different models are further categorized based on their functionality (model-, input-, output-based) and scope (local, global). Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique. The use of IAI is growing, distinguishing it from XAI. The major challenges identified are that most XIAI does not explore "global" modelling processes, the lack of best practices, and the lack of systematic evaluation and benchmarks. One important opportunity is to use attention mechanisms to enhance multi-modal XIAI for personalized medicine. Additionally, combining DL with causal logic holds promise. Our discussion encourages the integration of XIAI in Large Language Models (LLMs) and domain-specific smaller models. In conclusion, XIAI adoption in healthcare requires dedicated in-house expertise. Collaboration with domain experts, end-users, and policymakers can lead to ready-to-use XIAI methods across NLP and medical tasks. While challenges exist, XIAI techniques offer a valuable foundation for interpretable NLP algorithms in healthcare.

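The review finds attention mechanisms to be the most prevalent emerging IAI technique. As a minimal sketch of why attention lends itself to interpretation (this example is not from the paper; the tokens and embeddings are invented, and real clinical models use learned projections rather than raw embeddings), scaled dot-product attention yields a row-stochastic weight matrix whose entries can be read as token-level importance scores:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(Q, K):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d))
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d))

rng = np.random.default_rng(0)
tokens = ["patient", "denies", "chest", "pain"]   # hypothetical clinical tokens
E = rng.normal(size=(4, 8))                       # toy token embeddings

W = attention_weights(E, E)        # (4, 4); each row sums to 1
importance = W.mean(axis=0)        # average attention each token receives

for tok, w in zip(tokens, importance):
    print(f"{tok:10s} {w:.3f}")
```

Because each row of `W` is a probability distribution over input tokens, highlighting high-weight tokens is a cheap post-hoc explanation; whether such weights constitute faithful interpretation is itself debated in the literature the review surveys.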

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8a8/11126530/3d9c70804048/gr001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8a8/11126530/cae0578f9910/gr002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8a8/11126530/dfd2245bdd06/gr003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8a8/11126530/a73129caebfe/gr004.jpg

Similar articles

1. From explainable to interpretable deep learning for natural language processing in healthcare: How far from reality?
Comput Struct Biotechnol J. 2024 May 9;24:362-373. doi: 10.1016/j.csbj.2024.05.004. eCollection 2024 Dec.
2. Toward explainable AI (XAI) for mental health detection based on language behavior.
Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
3. Rationalization for explainable NLP: a survey.
Front Artif Intell. 2023 Sep 25;6:1225093. doi: 10.3389/frai.2023.1225093. eCollection 2023.
4. Exploring Explainable AI Techniques for Text Classification in Healthcare: A Scoping Review.
Stud Health Technol Inform. 2024 Aug 22;316:846-850. doi: 10.3233/SHTI240544.
5. Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
6. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis.
Med Image Anal. 2022 Jul;79:102470. doi: 10.1016/j.media.2022.102470. Epub 2022 May 4.
7. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
8. Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review.
J Am Med Inform Assoc. 2020 Jul 1;27(7):1173-1185. doi: 10.1093/jamia/ocaa053.
9. Sentiment Analysis of Customer Reviews of Food Delivery Services Using Deep Learning and Explainable Artificial Intelligence: Systematic Review.
Foods. 2022 May 21;11(10):1500. doi: 10.3390/foods11101500.
10. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.

Cited by

1. The future of pharmaceuticals: Artificial intelligence in drug discovery and development.
J Pharm Anal. 2025 Aug;15(8):101248. doi: 10.1016/j.jpha.2025.101248. Epub 2025 Feb 26.
2. Recent Applications of Artificial Intelligence and Related Technical Challenges in MALDI MS and MALDI-MSI: A Mini Review.
Mass Spectrom (Tokyo). 2025;14(1):A0175. doi: 10.5702/massspectrometry.A0175. Epub 2025 Jun 18.
3. Machine learning and deep learning to improve overall survival prediction in cervical cancer patients.
Transl Cancer Res. 2025 May 30;14(5):3057-3068. doi: 10.21037/tcr-2024-2304. Epub 2025 May 26.
4. Advancing breast, lung and prostate cancer research with federated learning. A systematic review.
NPJ Digit Med. 2025 May 27;8(1):314. doi: 10.1038/s41746-025-01591-5.
5. Machine learning models including patient-reported outcome data in oncology: a systematic literature review and analysis of their reporting quality.
J Patient Rep Outcomes. 2024 Nov 5;8(1):126. doi: 10.1186/s41687-024-00808-7.
6. Large Language Models for Wearable Sensor-Based Human Activity Recognition, Health Monitoring, and Behavioral Modeling: A Survey of Early Trends, Datasets, and Challenges.
Sensors (Basel). 2024 Aug 4;24(15):5045. doi: 10.3390/s24155045.

References

1. Is Attention all You Need in Medical Image Analysis? A Review.
IEEE J Biomed Health Inform. 2024 Mar;28(3):1398-1411. doi: 10.1109/JBHI.2023.3348436. Epub 2024 Mar 6.
2. Publisher Correction: Large language models encode clinical knowledge.
Nature. 2023 Aug;620(7973):E19. doi: 10.1038/s41586-023-06455-0.
3. The imperative for regulatory oversight of large language models (or generative AI) in healthcare.
NPJ Digit Med. 2023 Jul 6;6(1):120. doi: 10.1038/s41746-023-00873-0.
4. Training a Deep Contextualized Language Model for International Classification of Diseases, 10th Revision Classification via Federated Learning: Model Development and Validation Study.
JMIR Med Inform. 2022 Nov 10;10(11):e41342. doi: 10.2196/41342.
5. Empowering digital pathology applications through explainable knowledge extraction tools.
J Pathol Inform. 2022 Sep 15;13:100139. doi: 10.1016/j.jpi.2022.100139. eCollection 2022.
6. Discrete-time survival analysis in the critically ill: a deep learning approach using heterogeneous data.
NPJ Digit Med. 2022 Sep 14;5(1):142. doi: 10.1038/s41746-022-00679-6.
7. Multi-Aspect Deep Active Attention Network for Healthcare Explainable Adoption.
IEEE J Biomed Health Inform. 2023 Apr;27(4):1709-1717. doi: 10.1109/JBHI.2022.3204633. Epub 2023 Apr 4.
8. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
9. Deep Neural Networks for Simultaneously Capturing Public Topics and Sentiments During a Pandemic: Application on a COVID-19 Tweet Data Set.
JMIR Med Inform. 2022 May 25;10(5):e34306. doi: 10.2196/34306.
10. Word-level text highlighting of medical texts for telehealth services.
Artif Intell Med. 2022 May;127:102284. doi: 10.1016/j.artmed.2022.102284. Epub 2022 Mar 23.