Hsu Enshuo, Roberts Kirk
University of Texas Health Science Center at Houston.
Res Sq. 2024 Jun 28:rs.3.rs-4559971. doi: 10.21203/rs.3.rs-4559971/v1.
The performance of deep learning-based natural language processing systems depends on large amounts of labeled training data, which, in the clinical domain, are not easily available or affordable. Weak supervision and in-context learning offer partial solutions, particularly with large language models (LLMs), but their performance still trails that of traditional supervised methods trained on moderate amounts of gold-standard data. Moreover, inference with LLMs is computationally heavy. We propose an approach that combines LLM fine-tuning and weak supervision, requires virtually no domain knowledge, and still achieves consistently dominant performance. Using a prompt-based approach, the LLM generates weakly labeled data for training a downstream BERT model. The weakly supervised model is then further fine-tuned on small amounts of gold-standard data. We evaluate this approach using Llama2 on three n2c2 datasets. With no more than 10 gold-standard notes, our final BERT models, weakly supervised by fine-tuned Llama2-13B, consistently outperformed out-of-the-box PubMedBERT by 4.7-47.9% in F1 score. With only 50 gold-standard notes, our models approached the performance of fully fine-tuned systems.
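The two-stage pipeline described in the abstract (prompt an LLM to weakly label unlabeled notes, train a BERT model on those weak labels, then fine-tune it on a handful of gold-standard notes) can be sketched roughly as follows. This is a minimal illustration assuming Hugging Face transformers and datasets; the model identifiers, prompt wording, binary label set, and training settings are placeholders, not the authors' actual configuration.

```python
# Hedged sketch of LLM-based weak supervision followed by BERT fine-tuning.
# Model names, prompt, labels, and hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments, pipeline)
from datasets import Dataset

LABELS = ["absent", "present"]  # hypothetical binary label set

# Stage 1: prompt a Llama-2 model to weakly label unlabeled clinical notes.
llm = pipeline("text-generation", model="meta-llama/Llama-2-13b-hf")

def weak_label(note: str) -> int:
    prompt = (f"Clinical note:\n{note}\n\n"
              "Is the condition present or absent? Answer with one word: ")
    answer = llm(prompt, max_new_tokens=5)[0]["generated_text"][len(prompt):]
    return 1 if "present" in answer.lower() else 0

unlabeled_notes = ["..."]  # placeholder for de-identified, unlabeled notes
weak_ds = Dataset.from_dict({
    "text": unlabeled_notes,
    "label": [weak_label(n) for n in unlabeled_notes],
})

# Stage 2: train PubMedBERT on the weakly labeled notes.
name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))

def encode(batch):
    return tok(batch["text"], truncation=True, padding="max_length", max_length=512)

weak_ds = weak_ds.map(encode, batched=True)
Trainer(model=model,
        args=TrainingArguments("weak_stage", num_train_epochs=3),
        train_dataset=weak_ds).train()

# Stage 3: further fine-tune the same model on a few (e.g. 10-50) gold-standard notes.
gold_ds = Dataset.from_dict({"text": ["..."], "label": [1]}).map(encode, batched=True)
Trainer(model=model,
        args=TrainingArguments("gold_stage", num_train_epochs=3),
        train_dataset=gold_ds).train()
```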