Suppr超能文献

Fine-tuning Large Language Models for Rare Disease Concept Normalization.

Author information

Wang Andy, Liu Cong, Yang Jingye, Weng Chunhua

Affiliations

Peddie School, Hightstown, NJ, USA.

Department of Biomedical Informatics, Columbia University, New York, NY, USA.

Publication information

bioRxiv. 2024 Jun 13:2023.12.28.573586. doi: 10.1101/2023.12.28.573586.

Abstract

OBJECTIVE

We aim to develop a novel method for rare disease concept normalization by fine-tuning Llama 2, an open-source large language model (LLM), using a domain-specific corpus sourced from the Human Phenotype Ontology (HPO).

METHODS

We developed an in-house, template-based script to generate two corpora for fine-tuning. The first (NAME) contains standardized HPO names, sourced from the HPO vocabularies, along with their corresponding identifiers. The second (NAME+SYN) includes the HPO names and identifiers plus half of each concept's synonyms. We then fine-tuned Llama 2 (Llama 2-7B) on each sentence set and evaluated the models using a range of sentence prompts and various phenotype terms.
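The template-based corpus generation can be sketched roughly as follows. This is a minimal illustration, not the authors' actual script: the HPO entries, the sentence templates, and the `build_corpus` helper are all hypothetical stand-ins, and real data would be parsed from the HPO release files.

```python
import random

# A few illustrative (name, HPO ID, synonyms) entries; the real corpus
# would cover the full HPO vocabulary.
HPO_ENTRIES = [
    ("Arachnodactyly", "HP:0001166", ["Long slender fingers", "Spider fingers"]),
    ("Seizure", "HP:0001250", ["Epileptic seizure", "Seizures"]),
]

# Hypothetical sentence templates pairing a term with its identifier.
TEMPLATES = [
    "The HPO ID for {term} is {hpo_id}.",
    "{term} is normalized to {hpo_id}.",
]

def build_corpus(entries, include_synonyms=False, seed=0):
    """Generate fine-tuning sentences.

    NAME corpus: only standardized HPO names.
    NAME+SYN corpus: additionally, half of each concept's synonyms.
    """
    rng = random.Random(seed)
    sentences = []
    for name, hpo_id, synonyms in entries:
        terms = [name]
        if include_synonyms and synonyms:
            # Sample half of the synonyms (at least one) for this concept.
            terms.extend(rng.sample(synonyms, max(1, len(synonyms) // 2)))
        for term in terms:
            for tpl in TEMPLATES:
                sentences.append(tpl.format(term=term, hpo_id=hpo_id))
    return sentences

name_corpus = build_corpus(HPO_ENTRIES, include_synonyms=False)
name_syn_corpus = build_corpus(HPO_ENTRIES, include_synonyms=True)
```

Each generated sentence ties a surface term to its HPO identifier, so the fine-tuned model learns the term-to-ID mapping directly from text.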

RESULTS

When the phenotype terms for normalization were included in the fine-tuning corpora, both models demonstrated nearly perfect performance, averaging over 99% accuracy. In comparison, ChatGPT-3.5 achieved only ~20% accuracy in identifying HPO IDs for phenotype terms. When single-character typos were introduced into the phenotype terms, the accuracy of NAME and NAME+SYN dropped to 10.2% and 36.1%, respectively, but increased to 61.8% (NAME+SYN) with additional typo-specific fine-tuning. For terms sourced from the HPO vocabularies as unseen synonyms, the NAME model achieved 11.2% accuracy, while the NAME+SYN model achieved 92.7%.
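The single-character typo evaluation above can be sketched as below. This is an illustrative reconstruction, not the paper's evaluation code: `introduce_typo` and `accuracy` are hypothetical helpers, and `predict` stands in for a call to the fine-tuned model.

```python
import random
import string

def introduce_typo(term, seed=0):
    """Replace one randomly chosen character of `term` with a different letter."""
    rng = random.Random(seed)
    i = rng.randrange(len(term))
    # Pick a lowercase letter guaranteed to differ from the original character.
    replacement = rng.choice([c for c in string.ascii_lowercase if c != term[i].lower()])
    return term[:i] + replacement + term[i + 1:]

def accuracy(pairs, predict):
    """Fraction of (term, gold HPO ID) pairs for which predict(term) matches."""
    if not pairs:
        return 0.0
    correct = sum(1 for term, gold in pairs if predict(term) == gold)
    return correct / len(pairs)
```

In the paper's setup, `predict` would query the fine-tuned Llama 2 model with a sentence prompt containing the (possibly perturbed) phenotype term and parse the HPO ID from its output.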

CONCLUSION

Our fine-tuned models demonstrate the ability to normalize phenotype terms unseen in the fine-tuning corpus, including misspellings, synonyms, terms from other ontologies, and lay terms. Our approach provides a solution for using LLMs to identify named medical entities in clinical narratives while normalizing them to standard concepts in a controlled vocabulary.

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/318e/11181428/1264cdc3474e/nihpp-2023.12.28.573586v3-f0001.jpg
