Guo Qian, Guo Yi, Zhao Jin
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7263-7276. doi: 10.1109/TNNLS.2024.3386611. Epub 2025 Apr 4.
Low-resource relation extraction (LRE) aims to extract the relationships between given entities from natural language sentences in low-resource application scenarios, which has been an incredibly challenging task due to the limited annotated corpora. Existing studies either leverage self-training schemes to expand the scale of labeled data, where the error accumulation caused by the selection bias of pseudo-labels provokes a gradual drift problem in subsequent relation prediction, or utilize instance-wise contrastive learning, which fails to distinguish sentence pairs with similar semantics. To alleviate these defects, this article introduces a novel contrastive learning framework called hierarchical relation contrastive learning (HRCL) for LRE. HRCL leverages task-related instruction descriptions and schema constraints as prompts to generate high-level relation representations. To enhance the efficacy of contrastive learning, we further employ hierarchical affinity propagation clustering (HiPC) to derive hierarchical signals from the relational feature space with a hierarchy cross-attention (HCA) mechanism, and effectively optimize pair-level relation features through relation-wise contrastive learning. Exhaustive experiments have been conducted on five public relation extraction (RE) datasets in low-resource settings. The results demonstrate the effectiveness and robustness of HRCL, which outperforms the current state-of-the-art (SOTA) model by 6.56% on average in terms of B3F1. Our source code is publicly available at https://github.com/Phevos75/HRCLRE.
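The relation-wise contrastive idea described above — treating instances that fall into the same cluster as positives rather than relying on instance-level augmentations — can be sketched as follows. This is a minimal illustration under assumed inputs (a feature matrix and cluster assignments, e.g. from affinity propagation), not the paper's exact objective; the function name and temperature value are hypothetical.

```python
import numpy as np

def relation_contrastive_loss(features, cluster_ids, temperature=0.1):
    """Pair-level (relation-wise) contrastive loss sketch: instances whose
    cluster assignments agree are treated as positives; all other instances
    serve as negatives. Simplified for illustration only."""
    # L2-normalize relation representations so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature
    n = len(cluster_ids)
    loss, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        positives = [j for j in others if cluster_ids[j] == cluster_ids[i]]
        if not positives:
            continue  # no positive pair for this anchor
        denom = np.sum(np.exp(sim[i, others]))  # softmax denominator over all pairs
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / max(count, 1)
```

With well-separated clusters the loss is lower than when semantically dissimilar instances are forced into the same cluster, which is the behavior that lets clustering signals, rather than raw instance identity, drive representation learning.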