StaRS: Learning a Stable Representation Space for Continual Relation Classification.

Authors

Pang Ning, Zhao Xiang, Zeng Weixin, Tan Zhen, Xiao Weidong

Publication Info

IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9670-9683. doi: 10.1109/TNNLS.2024.3442236. Epub 2025 May 2.

DOI: 10.1109/TNNLS.2024.3442236
PMID: 39178074
Abstract

Relation classification (RC) aims to detect the semantic relation between two annotated entities in a sentence, an essential task in automatic knowledge graph construction. As new relations continually emerge, there is a recent trend toward training RC models in continual settings. To overcome the catastrophic forgetting problem in continual learning, existing research adopts a two-stage training paradigm: fast adaptation to novel relations, followed by memory replay over all historical relations. These memory-replay-based methods explore different techniques to mitigate forgetting in continual RC (CRC) models during the memory replay stage. However, we find that the representation space is distorted by the arrival of new relations during the fast adaptation phase. To address this issue, we propose a knowledge distillation strategy and design a margin loss, aiming to keep the RC model stable while it adapts to new relations. In addition, in the second stage, where only a limited number of typical memory instances are available, we introduce a self-contrastive learning objective to facilitate learning a balanced decision boundary for RC. Through this two-stage training, our objective is to acquire a stable representation space for encoding instances in CRC. We experimentally demonstrate the superiority of our model over competing methods in various settings, and the results suggest that our tailored designs achieve better performance in CRC.
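The two stages described in the abstract combine three objectives: a knowledge-distillation term and a margin loss that stabilize the representation space during fast adaptation, and a self-contrastive objective over the replayed memory instances. The sketch below shows one plausible form of each; the function names, the Euclidean-distance and temperature choices, and the prototype inputs are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def _softmax(x, T):
    """Temperature-scaled softmax over the last axis."""
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the (frozen) teacher's and the student's
    temperature-softened output distributions -- the standard
    knowledge-distillation objective used to keep the adapting model
    close to its pre-adaptation self."""
    p_t = _softmax(teacher_logits, T)
    p_s = _softmax(student_logits, T)
    return float(np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)))

def margin_loss(emb, proto_pos, proto_neg, margin=0.5):
    """Hinge-style margin loss: each instance embedding should be at
    least `margin` closer to its own relation prototype than to the
    nearest other-relation prototype."""
    d_pos = np.linalg.norm(emb - proto_pos, axis=-1)
    d_neg = np.linalg.norm(emb - proto_neg, axis=-1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

def self_contrastive_loss(view_a, view_b, tau=0.1):
    """InfoNCE-style objective over two views of the same memory
    instances: matching rows are positives, all other rows negatives."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    sim = a @ b.T / tau                      # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))  # -log p(positive | row)
```

In this reading, the first two losses are minimized during fast adaptation (the teacher being a frozen copy of the model from before the new relations arrived), while the contrastive term is minimized during memory replay, where only a few stored instances per old relation are available.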


Similar Articles

1
StaRS: Learning a Stable Representation Space for Continual Relation Classification.
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9670-9683. doi: 10.1109/TNNLS.2024.3442236. Epub 2025 May 2.
2
Label-Guided Relation Prototype Generation for Continual Relation Extraction.
PeerJ Comput Sci. 2024 Oct 8;10:e2327. doi: 10.7717/peerj-cs.2327. eCollection 2024.
3
Prototype-Guided Memory Replay for Continual Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.
4
Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay.
Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.
5
Boosting Knowledge Base Automatically via Few-Shot Relation Classification.
Front Neurorobot. 2020 Oct 27;14:584192. doi: 10.3389/fnbot.2020.584192. eCollection 2020.
6
Subspace distillation for continual learning.
Neural Netw. 2023 Oct;167:65-79. doi: 10.1016/j.neunet.2023.07.047. Epub 2023 Aug 6.
7
Continual Learning With Knowledge Distillation: A Survey.
IEEE Trans Neural Netw Learn Syst. 2024 Oct 18;PP. doi: 10.1109/TNNLS.2024.3476068.
8
Catastrophic Forgetting in Deep Graph Networks: A Graph Classification Benchmark.
Front Artif Intell. 2022 Feb 4;5:824655. doi: 10.3389/frai.2022.824655. eCollection 2022.
9
Tf-GCZSL: Task-free generalized continual zero-shot learning.
Neural Netw. 2022 Nov;155:487-497. doi: 10.1016/j.neunet.2022.08.034. Epub 2022 Sep 6.
10
Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning.
Front Neurorobot. 2023 Jun 27;17:1127642. doi: 10.3389/fnbot.2023.1127642. eCollection 2023.

Cited By

1
ERNIE-UIE: Advancing information extraction in Chinese medical knowledge graph.
PLoS One. 2025 May 29;20(5):e0325082. doi: 10.1371/journal.pone.0325082. eCollection 2025.