

PromptLink: Leveraging Large Language Models for Cross-Source Biomedical Concept Linking.

Authors

Xie Yuzhang, Lu Jiaying, Ho Joyce, Nahab Fadi, Hu Xiao, Yang Carl

Affiliations

Emory University, USA.

Publication Information

Int ACM SIGIR Conf Res Dev Inf Retr. 2024 Jul;2024:2589-2593. doi: 10.1145/3626772.3657904. Epub 2024 Jul 11.

Abstract

Linking (aligning) biomedical concepts across diverse data sources enables various integrative analyses, but it is challenging due to discrepancies in concept naming conventions. Various strategies have been developed to overcome this challenge, such as those based on string-matching rules, manually crafted thesauri, and machine learning models. However, these methods are constrained by limited prior biomedical knowledge and generalize poorly beyond the limited rules, thesauri, or training samples available. Recently, large language models (LLMs) have exhibited impressive results on diverse biomedical NLP tasks due to their unprecedentedly rich prior knowledge and strong zero-shot prediction abilities. However, LLMs suffer from issues including high costs, limited context length, and unreliable predictions. In this research, we propose PromptLink, a novel biomedical concept linking framework that leverages LLMs. It first employs a biomedical-specialized pre-trained language model to generate candidate concepts that fit within the LLM's context window. It then uses the LLM to link concepts through two-stage prompts: the first-stage prompt elicits the LLM's biomedical prior knowledge for the concept linking task, and the second-stage prompt requires the LLM to reflect on its own predictions to further enhance their reliability. Empirical results on the concept linking task between two electronic health record (EHR) datasets and an external biomedical knowledge graph (KG) demonstrate the effectiveness of PromptLink. Furthermore, PromptLink is a generic framework without reliance on additional prior knowledge, context, or training data, making it well-suited for concept linking across various types of data sources. The source code of this study is available at https://github.com/constantjxyz/PromptLink.
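
Sketched below is a minimal, hypothetical illustration of the retrieve-then-prompt flow the abstract describes: an embedding model first narrows the knowledge graph to a handful of candidate concepts so the prompt fits the context window, and the LLM is then queried in two stages, once to propose a link and once to reflect on its own answer. This is not the authors' implementation (see the GitHub repository above for that); the callables `embed_fn` and `llm_fn`, the prompt wording, and the cutoff `k` are all placeholder assumptions.

```python
# Minimal sketch of a PromptLink-style retrieve-then-prompt pipeline.
# NOT the authors' code (see https://github.com/constantjxyz/PromptLink);
# `embed_fn` and `llm_fn` are hypothetical callables supplied by the caller,
# e.g. a biomedical pre-trained encoder and an LLM chat-completion wrapper.
import numpy as np

def top_k_candidates(query, kg_concepts, embed_fn, k=10):
    """Candidate generation: keep the k nearest KG concepts by cosine
    similarity so the subsequent prompt fits the LLM's context window."""
    q = np.asarray(embed_fn([query]))[0]
    c = np.asarray(embed_fn(kg_concepts))
    sims = c @ q / (np.linalg.norm(c, axis=1) * np.linalg.norm(q) + 1e-9)
    order = np.argsort(-sims)[:k]
    return [kg_concepts[i] for i in order]

def link_concept(query, kg_concepts, embed_fn, llm_fn, k=10):
    """Two-stage prompting: (1) elicit a link from the LLM's biomedical
    prior knowledge, (2) ask the LLM to reflect on its own answer."""
    candidates = top_k_candidates(query, kg_concepts, embed_fn, k)

    # Stage 1: linking prompt (wording here is illustrative only).
    pick = llm_fn(
        f"EHR concept: '{query}'.\n"
        f"Candidate KG concepts: {candidates}.\n"
        "Which single candidate denotes the same biomedical concept? "
        "Reply with the candidate string, or 'none'."
    ).strip()
    if pick.lower() == "none" or pick not in candidates:
        return None

    # Stage 2: reflection prompt to improve reliability of the prediction.
    verdict = llm_fn(
        f"You linked the EHR concept '{query}' to the KG concept '{pick}'. "
        "Re-examine this link. Reply 'yes' if it is correct, otherwise 'no'."
    ).strip().lower()
    return pick if verdict.startswith("yes") else None
```

In practice, `embed_fn` would wrap a biomedical-specialized encoder and `llm_fn` an LLM API call; both, like the exact prompts, are stand-ins here rather than details taken from the paper.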


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a57e/11867735/82de09210121/nihms-2058358-f0001.jpg


