Harnessing large language models' zero-shot and few-shot learning capabilities for regulatory research.

Affiliations

Division of Applied Regulatory Science, Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, WO Bldg 64, 10903 New Hampshire Ave, Silver Spring, MD 20993, United States.

Publication Information

Brief Bioinform. 2024 Jul 25;25(5). doi: 10.1093/bib/bbae354.

Abstract

Large language models (LLMs) are sophisticated AI-driven models trained on vast sources of natural language data. They are adept at generating responses that closely mimic human conversational patterns. One of the most notable examples is OpenAI's ChatGPT, which has been extensively used across diverse sectors. Despite their flexibility, a significant challenge arises because most users must transmit their data to the servers of the companies operating these models. Using ChatGPT or similar models online may inadvertently expose sensitive information to the risk of data breaches. Therefore, implementing LLMs that are open source and smaller in scale within a secure local network becomes a crucial step for organizations where ensuring data privacy and protection has the highest priority, such as regulatory agencies. As a feasibility evaluation, we implemented a series of open-source LLMs within a regulatory agency's local network and assessed their performance on specific tasks involving extracting relevant clinical pharmacology information from regulatory drug labels. Our research shows that some models work well in the context of few- or zero-shot learning, achieving performance comparable to, or even better than, neural network models that needed thousands of training samples. One of the models was selected to address a real-world issue of finding intrinsic factors that affect drugs' clinical exposure without any training or fine-tuning. In a dataset of over 700,000 sentences, the model showed a 78.5% accuracy rate. Our work pointed to the possibility of implementing open-source LLMs within a secure local network and using these models to perform various natural language processing tasks when large numbers of training examples are unavailable.
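
The abstract does not disclose the specific prompts or models used, but the described workflow, zero-shot screening of drug-label sentences for intrinsic factors using a locally hosted open-source LLM, can be sketched roughly as below. The model checkpoint, prompt wording, and example sentence are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: zero-shot classification of a drug-label sentence with a
# locally hosted open-source LLM via Hugging Face transformers.
# The model checkpoint, prompt, and example sentence are assumptions only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed open-weight model on the local network
    device_map="auto",
)

sentence = (
    "The mean AUC of the drug increased approximately 2-fold in subjects "
    "with moderate hepatic impairment."
)

prompt = (
    "You are reviewing text from a regulatory drug label.\n"
    "Question: Does the sentence below describe an intrinsic factor "
    "(e.g., renal impairment, hepatic impairment, age, sex, genotype) "
    "that affects the drug's clinical exposure? Answer Yes or No.\n\n"
    f"Sentence: {sentence}\n"
    "Answer:"
)

output = generator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
# The pipeline returns the prompt plus the continuation; keep only the answer.
print(output.split("Answer:")[-1].strip())
```

In a zero-shot setting like this, the prompt alone defines the task; a few-shot variant would simply prepend a handful of labeled example sentences before the query, which is what makes the approach attractive when thousands of annotated training samples are unavailable.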
