Guo Jianhua, Yin Zhixiang, Feng Shuyang, Yao Donglin, Liu Shaopeng
School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, Guangdong, China.
School of Education, Guangzhou University, Guangzhou, Guangdong, China.
Sci Rep. 2025 Jan 16;15(1):2133. doi: 10.1038/s41598-025-86416-x.
Knowledge-aware recommendation systems often face challenges owing to sparse supervision signals and redundant entity relations, which can diminish the benefit of using knowledge graphs to enhance recommendation performance. To tackle these challenges, we propose a novel recommendation model, the Dual-Intent-View Contrastive Learning network (DIVCL), inspired by recent advances in contrastive and intent learning. DIVCL employs a dual-view representation learning approach based on Graph Neural Networks (GNNs), consisting of two distinct views: a local view built on the user-item interaction graph and a global view built on the user-item-entity knowledge graph. To further enhance learning, a set of intents is integrated into each user-item interaction as a separate class of nodes, fulfilling three crucial roles in the GNN learning process: (1) providing fine-grained representations of user-item interaction features, (2) acting as evaluators that filter relevant relations in the knowledge graph, and (3) participating in contrastive learning to strengthen the model's robustness to sparse signals and redundant relations. Experimental results on three benchmark datasets demonstrate that DIVCL outperforms state-of-the-art models. The implementation is available at https://github.com/yzxx667/DIVCL.
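The dual-view contrastive learning described above can be sketched as an InfoNCE-style objective between each node's local-view and global-view embeddings. The snippet below is an illustrative reconstruction under assumed details (the function name, batch-negatives scheme, and temperature value are not from the paper; DIVCL's exact loss may differ):

```python
import numpy as np

def dual_view_contrastive_loss(local_emb, global_emb, temperature=0.2):
    """InfoNCE-style contrastive loss: each node's local- and global-view
    embeddings form a positive pair; all other nodes in the batch act as
    negatives. (Illustrative sketch only, not DIVCL's exact objective.)"""
    # L2-normalize rows so dot products are cosine similarities.
    z1 = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    z2 = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # pairwise view similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the positive pair on the diagonal.
    return -np.mean(np.diag(log_probs))

# Toy usage: 8 nodes with 16-dimensional embeddings from each view.
rng = np.random.default_rng(0)
local_view = rng.standard_normal((8, 16))
global_view = rng.standard_normal((8, 16))
loss_aligned = dual_view_contrastive_loss(local_view, local_view)   # views agree
loss_random = dual_view_contrastive_loss(local_view, global_view)   # views unrelated
```

Aligned views yield a smaller loss than unrelated ones; minimizing this loss pulls the two views of the same node together while pushing apart views of different nodes, which is how contrastive learning compensates for sparse supervision signals.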