

KBPT: knowledge-based prompt tuning for zero-shot relation triplet extraction.

Author Information

Guo Qian, Guo Yi, Zhao Jin

Affiliations

Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China.

School of Computer Science, Fudan University, Shanghai, China.

Publication Information

PeerJ Comput Sci. 2024 May 24;10:e2014. doi: 10.7717/peerj-cs.2014. eCollection 2024.

Abstract

Knowledge representation is increasingly recognized as an effective method for information extraction. Nevertheless, numerous studies have disregarded its potential applications in the zero-shot setting. In this article, a novel framework, called knowledge-based prompt tuning for zero-shot relation triplet extraction (KBPT), was developed, founded on external ontology knowledge. This framework serves as a catalyst for exploring relation triplet extraction (RTE) methods in low-resource scenarios. Zero-shot RTE aims to extract multiple triplets, each consisting of a head entity, a tail entity, and a relation label, from an input sentence, where the extracted relation labels do not exist in the training set. To address the data scarcity problem in zero-shot RTE, a technique was introduced to synthesize training samples by prompting language models to generate structured texts. Specifically, language model prompts are combined with structured text methodologies to create a structured prompt template, which draws upon relation labels and ontology knowledge to generate synthetic training examples. The incorporation of external ontological knowledge enriches the semantic representation within the prompt template, enhancing its effectiveness. Further, a multiple triplets decoding (MTD) algorithm was developed to overcome the challenge of extracting multiple relation triplets from a single sentence. To bridge the gap between knowledge and text, a collective training method was established to jointly optimize embedding representations. The proposed model is model-agnostic and can be applied to various pretrained language models (PLMs). Exhaustive experiments under zero-shot settings on four public datasets demonstrate the effectiveness of the proposed method. Compared to the baseline models, KBPT improved the F1 score by up to 14.65% and 24.19% on the Wiki-ZSL and TACRED-Revisit datasets, respectively. Moreover, the proposed model outperformed the current state-of-the-art (SOTA) model in terms of F1 score, precision-recall (P-R) curves, and AUC. The code is available at https://Github.com/Phevos75/KBPT.
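The sample-synthesis idea described in the abstract can be sketched as follows: a structured prompt is built from a relation label plus ontology knowledge (entity-type constraints), and the language model's structured output is parsed back into triplets. This is a minimal illustrative sketch, not the paper's actual implementation; the ONTOLOGY mapping, the template wording, and the "head | relation | tail" output format are all assumptions made for the example.

```python
# Illustrative sketch of knowledge-based structured prompting for zero-shot
# RTE. ONTOLOGY maps each relation label to (head type, tail type); these
# entries and the prompt wording are hypothetical.

ONTOLOGY = {
    "place_of_birth": ("person", "location"),
    "founded_by": ("organization", "person"),
}

def build_prompt(relation: str) -> str:
    """Compose a structured prompt for one unseen relation label."""
    head_type, tail_type = ONTOLOGY[relation]
    return (
        f"Relation: {relation}. "
        f"The head entity is a {head_type}; the tail entity is a {tail_type}. "
        "Write a sentence and mark each triplet as 'head | relation | tail'."
    )

def parse_triplets(generated: str) -> list[tuple[str, str, str]]:
    """Parse structured output lines of the form 'head | relation | tail'.

    Returning a list mirrors the multiple-triplets setting: one input
    sentence may yield several (head, relation, tail) triplets.
    """
    triplets = []
    for line in generated.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triplets.append(tuple(parts))
    return triplets
```

In this sketch, the ontology's type constraints do the work attributed to external knowledge in the abstract: they restrict what the synthesized head and tail entities can be, enriching the prompt's semantics for relations never seen during training.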


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08d5/11157611/f1b9bfa51ed0/peerj-cs-10-2014-g001.jpg

Similar Articles

1
Prompt Tuning in Biomedical Relation Extraction.
J Healthc Inform Res. 2024 Feb 29;8(2):206-224. doi: 10.1007/s41666-024-00162-9. eCollection 2024 Jun.
2
HRCL: Hierarchical Relation Contrastive Learning for Low-Resource Relation Extraction.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7263-7276. doi: 10.1109/TNNLS.2024.3386611. Epub 2025 Apr 4.
3
TGIN: Translation-Based Graph Inference Network for Few-Shot Relational Triplet Extraction.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9147-9161. doi: 10.1109/TNNLS.2022.3218981. Epub 2024 Jul 8.
4
A co-adaptive duality-aware framework for biomedical relation extraction.
Bioinformatics. 2023 May 4;39(5). doi: 10.1093/bioinformatics/btad301.
5
Multi-view graph representation with similarity diffusion for general zero-shot learning.
Neural Netw. 2023 Sep;166:38-50. doi: 10.1016/j.neunet.2023.06.045. Epub 2023 Jul 7.
6
HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural Language Processing.
AMIA Annu Symp Proc. 2023 Apr 29;2022:972-981. eCollection 2022.
7
HCL: A Hierarchical Contrastive Learning Framework for Zero-Shot Relation Extraction.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5694-5705. doi: 10.1109/TNNLS.2024.3379527. Epub 2025 Feb 28.

