Guo Qian, Guo Yi, Zhao Jin
Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China.
School of Computer Science, Fudan University, Shanghai, China.
PeerJ Comput Sci. 2024 May 24;10:e2014. doi: 10.7717/peerj-cs.2014. eCollection 2024.
Knowledge representation is increasingly recognized as an effective method for information extraction, yet many studies have overlooked its potential in the zero-shot setting. This article presents knowledge-based prompt tuning for zero-shot relation triplet extraction (KBPT), a novel framework founded on external ontology knowledge that serves as a catalyst for exploring relation triplet extraction (RTE) methods in low-resource scenarios. In the zero-shot setting, RTE aims to extract from an input sentence multiple triplets consisting of head entities, tail entities, and relation labels, where the extracted relation labels do not appear in the training set. To address the data scarcity of zero-shot RTE, a technique was introduced that synthesizes training samples by prompting language models to generate structured texts: language-model prompts are combined with structured-text methods to build a structured prompt template, which draws on relation labels and ontology knowledge to generate synthetic training examples. Incorporating external ontological knowledge enriches the semantic representation within the prompt template and improves its effectiveness. A multiple triplets decoding (MTD) algorithm was further developed to overcome the challenge of extracting multiple relation triplets from a single sentence, and a collective training method was established to jointly optimize embedding representations, bridging the gap between knowledge and text. The proposed model is model-agnostic and can be applied to various pre-trained language models (PLMs). Extensive experiments on four public datasets under zero-shot settings demonstrate the effectiveness of the proposed method. Compared with baseline models, KBPT improves F1 score by up to 14.65% on Wiki-ZSL and 24.19% on TACRED-Revisit.
Moreover, the proposed model outperforms the current state-of-the-art (SOTA) model in terms of F1 score, precision-recall (P-R) curves, and AUC. The code is available at https://Github.com/Phevos75/KBPT.
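The two components described above, a structured prompt template built from a relation label plus its ontology gloss, and decoding of multiple triplets from generated structured text, can be illustrated with a minimal sketch. This is not the paper's implementation: the template wording, the `[HEAD]`/`[REL]`/`[TAIL]` markers, and both function names are illustrative assumptions.

```python
# Hypothetical sketch of KBPT-style structured prompting. The exact template
# and markers used by the paper are not reproduced here; these are assumed.

def build_prompt(relation: str, ontology_gloss: str) -> str:
    """Compose a structured prompt asking a language model to synthesize a
    training sentence containing a triplet for an unseen relation label,
    enriched with an ontology definition of that relation."""
    return (
        f"Relation: {relation}. Definition: {ontology_gloss} "
        f"Generate a sentence in the form: [HEAD] h [REL] {relation} [TAIL] t."
    )

def parse_structured_text(text: str) -> list[tuple[str, str, str]]:
    """Parse generated structured text of the form
    '[HEAD] h [REL] r [TAIL] t [HEAD] ...' into (head, relation, tail)
    triplets; a sentence may yield several triplets."""
    triplets = []
    for segment in text.split("[HEAD]")[1:]:
        head, _, rest = segment.partition("[REL]")
        rel, _, tail = rest.partition("[TAIL]")
        triplets.append((head.strip(), rel.strip(), tail.strip()))
    return triplets
```

For example, `parse_structured_text("[HEAD] Marie Curie [REL] educated at [TAIL] University of Paris")` recovers the single triplet `("Marie Curie", "educated at", "University of Paris")`, and a generation containing two `[HEAD]` spans yields two triplets.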