University of Wisconsin School of Medicine and Public Health, 750 Highland Ave, Madison, WI 53726, USA.
J Biomed Inform. 2024 Sep;157:104707. doi: 10.1016/j.jbi.2024.104707. Epub 2024 Aug 13.
Traditional knowledge-based and machine learning diagnostic decision support systems have benefited from integrating the medical domain knowledge encoded in the Unified Medical Language System (UMLS). The emergence of Large Language Models (LLMs) as potential replacements for these traditional systems raises questions about the quality and extent of the medical knowledge in the models' internal representations, and about the continued need for external knowledge sources. The objective of this study is threefold: to probe the diagnosis-related medical knowledge of popular LLMs, to examine the benefit of providing UMLS knowledge to LLMs (grounding the diagnosis predictions), and to evaluate the correlation between human judgments and UMLS-based metrics for LLM generations.
We evaluated diagnoses generated by LLMs from consumer health questions and from daily care notes in electronic health records, using the ConsumerQA and Problem Summarization datasets. We probed the LLMs for UMLS knowledge by prompting them to complete diagnosis-related UMLS knowledge paths. We examined grounding with an approach that combined UMLS graph paths and clinical notes in the prompts, and compared the results to prompting without the UMLS paths. The final experiments examined the alignment of different evaluation metrics, UMLS-based and non-UMLS, with human expert evaluation.
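For illustration, below is a minimal Python sketch of what these two prompting setups might look like, using the OpenAI chat completions client. The prompt templates, the UMLS relation labels, the example paths, and the clinical note are all hypothetical assumptions for demonstration; the paper's exact prompts are not reproduced here.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        # Single zero-temperature chat completion for repeatable output.
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

    # 1) Probing: ask the model to complete a one-hop UMLS path.
    #    The relation name below is illustrative, not a verified UMLS label.
    probe = (
        "Complete this UMLS knowledge path with a single target concept.\n"
        "Source concept: Myocardial infarction\n"
        "Relation: has_finding_site\n"
        "Target concept:"
    )
    print(ask(probe))

    # 2) Grounding: prepend retrieved UMLS paths to the clinical note
    #    before asking for diagnoses (hypothetical paths and note).
    umls_paths = [
        "Chest pain -- manifestation_of --> Myocardial infarction",
        "Elevated troponin -- associated_with --> Myocardial infarction",
    ]
    note = "58M with crushing substernal chest pain radiating to the left arm."
    grounded = (
        "Relevant UMLS knowledge paths:\n"
        + "\n".join(umls_paths)
        + "\n\nClinical note:\n" + note
        + "\n\nList the most likely diagnoses:"
    )
    print(ask(grounded))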
In probing the UMLS knowledge, GPT-3.5 significantly outperformed Llama2 and a simple baseline, achieving an F1 score of 10.9% on completing one-hop UMLS paths for a given concept. Grounding the diagnosis predictions with UMLS paths improved the results for both models on both tasks, with the largest improvement (4%) in SapBERT score. The widely used evaluation metrics (ROUGE and SapBERT) correlated only weakly with human judgments.
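For context, here is a minimal sketch of how a SapBERT-based similarity score between a generated diagnosis and a reference might be computed, using the publicly released cambridgeltl/SapBERT-from-PubMedBERT-fulltext checkpoint with [CLS] embeddings and cosine similarity. The pairing and aggregation scheme used in the paper is an assumption here; treat this as an illustration, not the paper's exact scoring protocol.

    import torch
    from transformers import AutoModel, AutoTokenizer

    MODEL_ID = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)
    model.eval()

    def embed(texts):
        # SapBERT represents an entity name by its [CLS] token embedding.
        batch = tokenizer(texts, padding=True, truncation=True,
                          return_tensors="pt")
        with torch.no_grad():
            out = model(**batch)
        return out.last_hidden_state[:, 0, :]

    prediction, reference = embed(["myocardial infarction", "heart attack"])
    score = torch.nn.functional.cosine_similarity(prediction, reference, dim=0)
    print(f"SapBERT cosine similarity: {score.item():.3f}")

Because SapBERT is trained to pull synonymous UMLS concept names together in embedding space, such a score rewards clinically equivalent wording that surface-overlap metrics like ROUGE would miss.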
We found that while popular LLMs contain some medical knowledge in their internal representations, augmenting them with UMLS knowledge yields performance gains on diagnosis generation. The UMLS knowledge needs to be tailored to the task to improve the LLMs' predictions. Finding evaluation metrics that align with human judgments better than the traditional ROUGE and BERT-based scores remains an open research question.