Qiuhao Lu, Andrew Wen, Thien Nguyen, Hongfang Liu
McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, United States.
Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, United States.
JMIR AI. 2024 Aug 6;3:e56932. doi: 10.2196/56932.
BACKGROUND: Despite their growing use in health care, pretrained language models (PLMs) often lack clinical relevance because of insufficient domain expertise and poor interpretability. A key strategy for overcoming these challenges is to integrate external knowledge into PLMs, enhancing their adaptability and clinical usefulness. Current biomedical knowledge graphs such as UMLS (Unified Medical Language System), SNOMED CT (Systematized Nomenclature of Medicine-Clinical Terms), and HPO (Human Phenotype Ontology), while comprehensive, fail to effectively connect general biomedical knowledge with physician insights. There is an equally important need for a model that integrates diverse knowledge in a way that is both unified and compartmentalized. Such an approach not only addresses the heterogeneous nature of domain knowledge but also recognizes that individual health care institutions maintain unique data and knowledge repositories, necessitating careful and respectful management of proprietary information.

OBJECTIVE: This study aimed to enhance the clinical relevance and interpretability of PLMs by integrating external knowledge in a manner that respects the diversity and proprietary nature of health care data. We hypothesized that domain knowledge, when captured and distributed as stand-alone modules, can be effectively reintegrated into PLMs to significantly improve their adaptability and utility in clinical settings.

METHODS: We demonstrate that adapters, small and lightweight neural networks that enable the integration of extra information without full model fine-tuning, allow us to inject diverse sources of external domain knowledge into language models and improve overall performance with an increased level of interpretability. As a practical application of this methodology, we introduce a novel task, structured as a case study, that aims to capture the knowledge physicians apply when assigning cardiovascular diagnoses from clinical narratives: we extract diagnosis-comment pairs from electronic health records (EHRs) and cast the problem as text classification.

RESULTS: The study demonstrates that integrating domain knowledge into PLMs significantly improves their performance. Although the improvements with ClinicalBERT are more modest, likely because it was pretrained on clinical text, BERT (bidirectional encoder representations from transformers) equipped with knowledge adapters surprisingly matches or exceeds ClinicalBERT on several metrics. This underscores the effectiveness of knowledge adapters and highlights their potential in settings with strict data privacy constraints. The approach also makes these models more interpretable in a clinical context, enhancing our ability to identify and apply the domain knowledge most relevant to a specific task, thereby optimizing the model's performance and tailoring it to specific clinical needs.

CONCLUSIONS: This research provides a basis for creating health knowledge graphs infused with physician knowledge, marking a significant step forward for PLMs in health care. Notably, the model balances comprehensive and selective knowledge integration, addressing both the heterogeneous nature of medical knowledge and the privacy needs of health care institutions.
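The abstract does not include the authors' implementation, so the following is a minimal PyTorch sketch of the bottleneck-adapter pattern it describes: a small trainable module is attached to each frozen encoder layer of BERT, and only the adapters and a classification head are updated for the diagnosis-comment text classification task. All class names, the bottleneck width, and the label count are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Only these weights are trained."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdapterBertClassifier(nn.Module):
    """Frozen BERT backbone with one adapter per encoder layer and a
    linear head for diagnosis-comment text classification."""
    def __init__(self, num_labels: int, bottleneck: int = 64):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():
            p.requires_grad = False  # the PLM itself is never fine-tuned
        hidden = self.bert.config.hidden_size
        self.adapters = nn.ModuleList(
            Adapter(hidden, bottleneck) for _ in self.bert.encoder.layer
        )
        # forward hooks rewrite each encoder layer's output with its adapter
        for layer, adapter in zip(self.bert.encoder.layer, self.adapters):
            layer.register_forward_hook(self._make_hook(adapter))
        self.classifier = nn.Linear(hidden, num_labels)

    @staticmethod
    def _make_hook(adapter: Adapter):
        def hook(module, inputs, output):
            # BertLayer returns a tuple; adapt its hidden states
            return (adapter(output[0]),) + output[1:]
        return hook

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # classify from the [CLS] token representation
        return self.classifier(out.last_hidden_state[:, 0])

# Illustrative usage; the label set and example text are invented.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = AdapterBertClassifier(num_labels=5)
batch = tokenizer(["Echo shows reduced EF; consistent with heart failure."],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 5)

Freezing the backbone is what makes the knowledge modules stand-alone: each adapter can be trained, stored, and shared independently of the PLM weights, which matters when the underlying knowledge source is proprietary to a single institution.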
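The abstract also emphasizes integrating diverse knowledge sources in a way that is "both unified and compartmentalized." One common way to realize this with adapters is an attention-weighted mixture over several independently trained knowledge adapters, in the spirit of AdapterFusion (Pfeiffer et al, 2021); the sketch below is an assumption about how such fusion could look, not the authors' method.

import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Attention-weighted mixture of several knowledge adapters, so the
    sources stay compartmentalized but are unified at inference time."""
    def __init__(self, hidden_size: int, adapters: list):
        super().__init__()
        self.adapters = nn.ModuleList(adapters)  # e.g., UMLS, HPO, physician
        self.query = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cands = torch.stack([a(x) for a in self.adapters], dim=2)  # (B,T,K,H)
        q = self.query(x).unsqueeze(2)                             # (B,T,1,H)
        attn = torch.softmax((q * cands).sum(-1), dim=-1)          # (B,T,K)
        return (attn.unsqueeze(-1) * cands).sum(dim=2)             # (B,T,H)

The per-token attention weights over adapters double as an interpretability signal: they indicate which knowledge source (for example, a UMLS-trained adapter versus a physician-knowledge adapter) the model drew on for a given input, which matches the interpretability claim in the RESULTS section.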