
Label-Guided Relation Prototype Generation for Continual Relation Extraction

Author information

Liu Shuang, Chen XunQin, Chen Peng, Kolmanič Simon

Affiliations

School of Computer Science and Engineering, Dalian Minzu University, Dalian, China.

School of Computer and Software, Dalian Neusoft University of Information, Dalian, China.

Publication information

PeerJ Comput Sci. 2024 Oct 8;10:e2327. doi: 10.7717/peerj-cs.2327. eCollection 2024.

Abstract

Continual relation extraction (CRE) aims to extract relations from new data that arrives continuously and iteratively. To address catastrophic forgetting, existing work has explored memory replay methods that store typical previously learned instances, or that embed all observed relations as prototypes by averaging the hidden representations of samples, and replay them during subsequent training. However, this prototype generation method overlooks the rich semantic information in the label namespace and is also constrained by memory size, so the resulting relation prototypes describe relation semantics inadequately. To this end, we introduce an approach termed Label-Guided Relation Prototype Generation. First, we enhance the label embedding representations through a technique named label knowledge infusion. Then, we use a multi-head attention mechanism to form relation prototypes, allowing them to capture diverse aspects of the typical instances; the relation label embeddings are used at this stage to exploit the semantics they carry. Additionally, we propose a feature-based distillation loss, multi-similarity distillation, to ensure the model retains prior knowledge after learning new tasks. Experimental results show that our method achieves competitive performance compared to state-of-the-art baseline models for CRE.
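As a rough illustration of the prototype-generation idea described above, the sketch below (not the authors' code; the tensor shapes, the use of PyTorch's nn.MultiheadAttention, and the class and variable names are all assumptions) treats the infused relation label embedding as the attention query over the hidden representations of the memorized typical instances, so the prototype weights different aspects of those instances instead of simply averaging them.

```python
import torch
import torch.nn as nn


class LabelGuidedPrototype(nn.Module):
    """Attention-pool memorized instance features, using a label embedding as the query."""

    def __init__(self, hidden_dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Hypothetical choice: a single multi-head attention layer pools the memory.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, label_emb: torch.Tensor, memory_reps: torch.Tensor) -> torch.Tensor:
        # label_emb:   (hidden_dim,)         relation label embedding after knowledge infusion
        # memory_reps: (num_mem, hidden_dim) hidden representations of stored typical instances
        query = label_emb.view(1, 1, -1)      # (batch=1, seq=1, hidden_dim)
        kv = memory_reps.unsqueeze(0)         # (1, num_mem, hidden_dim)
        proto, _ = self.attn(query, kv, kv)   # label-guided attention over the memory
        return proto.squeeze(0).squeeze(0)    # (hidden_dim,) relation prototype


if __name__ == "__main__":
    torch.manual_seed(0)
    gen = LabelGuidedPrototype()
    label_emb = torch.randn(768)        # e.g., an encoding of the relation label text
    memory_reps = torch.randn(10, 768)  # e.g., 10 memorized instances of that relation
    print(gen(label_emb, memory_reps).shape)  # torch.Size([768])
```

Replacing the mean-pooled prototype with such an attention-pooled one is one plausible reading of "label-guided" generation; the paper's actual architecture may differ.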


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ede/11622975/a8fe1141928c/peerj-cs-10-2327-g001.jpg
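The abstract names a feature-based "multi-similarity distillation" loss but does not spell out its form. Below is a hedged sketch of one common feature-relation distillation pattern that fits the description: match the pairwise cosine-similarity structure of the current model's features to that of the frozen previous model on replayed instances, so earlier relation knowledge is preserved. The function name and the MSE-on-similarity-matrices formulation are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F


def multi_similarity_distillation(old_feats: torch.Tensor,
                                  new_feats: torch.Tensor) -> torch.Tensor:
    # old_feats, new_feats: (batch, hidden_dim) features of the same replayed
    # instances from the frozen previous-task model and the current model.
    old_sim = F.cosine_similarity(old_feats.unsqueeze(1), old_feats.unsqueeze(0), dim=-1)
    new_sim = F.cosine_similarity(new_feats.unsqueeze(1), new_feats.unsqueeze(0), dim=-1)
    # Penalize drift of the pairwise similarity structure (the old model is the teacher).
    return F.mse_loss(new_sim, old_sim.detach())


if __name__ == "__main__":
    torch.manual_seed(0)
    old = torch.randn(16, 768)               # teacher features (frozen old model)
    new = old + 0.1 * torch.randn(16, 768)   # student features after learning a new task
    print(multi_similarity_distillation(old, new).item())
```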

Similar articles

Improving few-shot relation extraction through semantics-guided learning.
Neural Netw. 2024 Jan;169:453-461. doi: 10.1016/j.neunet.2023.10.053. Epub 2023 Nov 3.

StaRS: Learning a Stable Representation Space for Continual Relation Classification.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9670-9683. doi: 10.1109/TNNLS.2024.3442236. Epub 2025 May 2.

Prototype-Guided Memory Replay for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.

Document-level Relation Extraction with Relation Correlations.
Neural Netw. 2024 Mar;171:14-24. doi: 10.1016/j.neunet.2023.11.062. Epub 2023 Nov 30.

