Zhou Weipeng, Miller Timothy A
Department of Biomedical Informatics and Medical Education, School of Medicine, University of Washington-Seattle, Seattle, WA 98195, United States.
Computational Health Informatics Program, Boston Children's Hospital, Boston, MA 02215, United States.
JAMIA Open. 2024 Aug 13;7(3):ooae075. doi: 10.1093/jamiaopen/ooae075. eCollection 2024 Oct.
Clinical note section identification helps locate relevant information and could benefit downstream tasks such as named entity recognition. However, traditional supervised methods suffer from transferability issues. This study proposes a new framework that uses large language models (LLMs) for section identification to overcome these limitations.
We framed section identification as question answering and provided the section definitions in free text. We evaluated multiple LLMs off the shelf without any training. We also fine-tuned LLMs to investigate how the size and specificity of the fine-tuning dataset affect model performance.
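A minimal sketch of this question-answering framing is shown below, assuming an OpenAI-style chat API. The section names, definitions, and prompt wording are illustrative assumptions, not the authors' exact prompt or section schema.

```python
# Sketch: section identification framed as question answering over
# free-text section definitions. Prompt content here is hypothetical.
from openai import OpenAI

SECTION_DEFINITIONS = {
    "medications": "Drugs the patient is currently taking, with dose and frequency.",
    "allergies": "Substances the patient is allergic to and the reactions observed.",
    "assessment_and_plan": "The clinician's diagnostic impression and next steps.",
}

def build_prompt(note_segment: str) -> str:
    """Ask the model which defined section a note segment belongs to."""
    definitions = "\n".join(
        f"- {name}: {desc}" for name, desc in SECTION_DEFINITIONS.items()
    )
    return (
        "You are given free-text definitions of clinical note sections:\n"
        f"{definitions}\n\n"
        "Question: Which section does the following note segment belong to? "
        "Answer with one section name, or 'none' if no definition applies.\n\n"
        f"Note segment:\n{note_segment}"
    )

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": build_prompt("Lisinopril 10 mg daily.")}],
    temperature=0,
)
print(response.choices[0].message.content)
```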
GPT4 achieved the highest F1 score, 0.77. The best open-source model (Tulu2-70b) achieved 0.64, on par with GPT3.5 (ChatGPT). GPT4 also obtained F1 scores greater than 0.9 for 9 of the 27 (33%) section types and greater than 0.8 for 15 of the 27 (56%) section types. For our fine-tuned models, performance plateaued as the size of the general-domain dataset increased. We also found that adding a reasonable number of section identification examples was beneficial.
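Per-section-type F1 scores such as those reported above can be computed as in the generic sketch below, which uses scikit-learn with hypothetical labels; it is not the paper's evaluation code.

```python
# Generic sketch of per-section-type F1 scoring (labels are hypothetical).
from sklearn.metrics import f1_score

section_types = ["medications", "allergies", "assessment_and_plan"]
gold = ["medications", "allergies", "medications", "assessment_and_plan"]
pred = ["medications", "allergies", "allergies", "assessment_and_plan"]

# average=None yields one F1 score per section type, in label order.
per_section_f1 = f1_score(gold, pred, labels=section_types, average=None)
for name, score in zip(section_types, per_section_f1):
    print(f"{name}: {score:.2f}")
```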
These results indicate that GPT4 is nearly production-ready for section identification, appearing to possess both knowledge of note structure and the ability to follow complex instructions, and that the best current open-source LLM is catching up.
Our study shows that LLMs are promising for generalizable clinical note section identification. They have the potential to be further improved by adding section identification examples to the fine-tuning dataset.