Feng Siling, Chen Bolin, Liu Qian, Huang Mengxing
School of Information and Communication Engineering, Hainan University, Haikou, 570228, China.
Sci Rep. 2025 May 27;15(1):18474. doi: 10.1038/s41598-025-00314-w.
Temporal knowledge graph reasoning (TKGR) has attracted widespread attention due to its ability to handle dynamic temporal features. However, existing methods face three major challenges: (1) difficulty capturing long-distance dependencies in information-sparse environments; (2) interference from noise; (3) the complexity of modeling temporal relationships. These challenges seriously impact the accuracy and robustness of reasoning. To address them, we propose a framework based on Dual-gate and Noise-aware Contrastive Learning (DNCL) to improve the performance of TKGR. The framework consists of three core modules: (1) a multi-dimensional gated update module, which flexibly selects key information and suppresses redundant information through a dual-gate mechanism, thereby alleviating the long-distance dependency problem; (2) a noise-aware adversarial modeling module, which improves robustness and reduces the impact of noise through adversarial training; (3) a multi-layer embedding contrastive learning module, which enhances representation ability through intra-layer and inter-layer contrastive learning strategies to better capture latent relationships in the temporal dimension. Experimental results on four benchmark datasets show that the DNCL model outperforms current methods; on the ICEWS14, ICEWS05-15 and ICEWS18 datasets, Hit@1 improves by 6.91%, 4.31% and 5.30%, respectively.
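The dual-gate update described in module (1) can be pictured as a GRU-style recurrence in which one gate selects key new information while a second gate suppresses redundant history. The following is a minimal NumPy sketch of that idea only; the function and parameter names are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_gate_update(h_prev, x, params):
    """One hypothetical dual-gate step: a select gate weighs new evidence,
    a suppress gate damps redundant history before forming the candidate."""
    Ws, Us, Wr, Ur, Wc, Uc = params
    s = sigmoid(x @ Ws + h_prev @ Us)        # select gate: keep key information
    r = sigmoid(x @ Wr + h_prev @ Ur)        # suppress gate: filter redundancy
    c = np.tanh(x @ Wc + (r * h_prev) @ Uc)  # candidate state from gated history
    return (1.0 - s) * h_prev + s * c        # gated interpolation of old and new

# Toy roll-out over a short event history with random weights.
rng = np.random.default_rng(0)
d = 4
params = tuple(rng.standard_normal((d, d)) * 0.1 for _ in range(6))
h = np.zeros(d)
for _ in range(3):
    h = dual_gate_update(h, rng.standard_normal(d), params)
print(h.shape)  # (4,)
```

Because the candidate passes through `tanh` and the state is a convex combination of the previous state and the candidate, the hidden state stays bounded, which is one reason gated updates remain stable over long histories.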