Ben Shoham Ofir, Rappoport Nadav
Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Israel.
PLOS Digit Health. 2024 Dec 6;3(12):e0000680. doi: 10.1371/journal.pdig.0000680. eCollection 2024 Dec.
We present Clinical Prediction with Large Language Models (CPLLM), a method that fine-tunes a pre-trained Large Language Model (LLM) to predict clinical disease diagnoses and hospital readmission. We used quantization and fine-tuned the LLM with prompts. For diagnosis prediction, we predicted whether patients would be diagnosed with a target disease during their next visit or in the subsequent diagnosis, leveraging their historical medical records. We compared our results to several baselines, including RETAIN and Med-BERT, the latter being the current state-of-the-art model for disease prediction using temporal structured EHR data. We also evaluated CPLLM's utility for predicting hospital readmission and compared its performance with benchmark baselines. Our experiments showed that CPLLM surpasses all tested models in both PR-AUC and ROC-AUC, providing state-of-the-art performance for predicting disease diagnoses and patient hospital readmission without requiring pre-training on medical data. Such a method can be easily implemented and integrated into the clinical workflow to help care providers plan next steps for their patients.
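To make the abstract's description of "quantization and fine-tuning the LLM using prompts" concrete, the sketch below shows one common way such a setup could look in Python with HuggingFace transformers, bitsandbytes, and PEFT (QLoRA-style low-rank adapters). This is an illustrative assumption, not the paper's exact pipeline: the model name, prompt wording, and hyperparameters are placeholders.

```python
# Illustrative sketch only: prompt-based fine-tuning of a 4-bit quantized LLM
# for next-visit diagnosis prediction. Model, prompt format, and LoRA settings
# are assumptions, not the configuration reported in the CPLLM paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "meta-llama/Llama-2-13b-hf"  # placeholder base LLM

# 4-bit quantization so the model fits in memory during fine-tuning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters (LoRA) so only a small set of added weights is trained.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

def build_prompt(diagnosis_history, target_disease):
    """Turn a patient's historical diagnoses into a text prompt (hypothetical format)."""
    history = "; ".join(diagnosis_history)
    return (
        f"Patient diagnosis history: {history}. "
        f"Will the patient be diagnosed with {target_disease} at the next visit? Answer yes or no: "
    )

# Example prompt/label pair; during fine-tuning, the loss on the answer token
# teaches the adapted model the binary prediction task.
prompt = build_prompt(["Essential hypertension", "Type 2 diabetes mellitus"], "chronic kidney disease")
label = "yes"
```

Prompts like the one above would be built from each patient's structured diagnosis history and paired with a yes/no label, then fed through a standard causal-LM fine-tuning loop; the quantization plus LoRA combination is what keeps this feasible on a single GPU.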