School of Computer Science and Technology, Shandong University of Technology, China.
Neural Netw. 2024 Nov;179:106570. doi: 10.1016/j.neunet.2024.106570. Epub 2024 Jul 24.
Sequential recommendation typically relies on deep neural networks to mine the rich information in interaction sequences. However, existing methods often suffer from insufficient interaction data. To alleviate this sparsity issue, self-supervised learning has been introduced into sequential recommendation. Despite its effectiveness, we argue that current self-supervised learning-based (SSL-based) sequential recommendation models have two limitations: (1) they use only a single self-supervised learning method, either contrastive or generative; and (2) they employ a simple data augmentation strategy in only one domain, either the graph structure or the node features. Consequently, they neither fully exploit the capabilities of both self-supervised paradigms nor sufficiently explore the advantages of combining graph augmentation schemes, and thus often fail to learn better item representations. In light of this, we propose a novel multi-task sequential recommendation framework named Adaptive Self-supervised Learning for sequential Recommendation (ASLRec). Specifically, our framework adaptively combines contrastive and generative self-supervised learning, simultaneously applying different perturbations at both the graph-topology and node-feature levels. This approach constructs diverse augmented graph views and employs multiple loss functions (contrastive, generative, mask, and prediction losses) for joint training. By combining the strengths of these methods, our model learns item representations across different augmented graph views, achieving better performance and effectively mitigating interaction noise and sparsity. In addition, we add a small proportion of random uniform noise to the item representations, making them more uniform and mitigating the inherent popularity bias in interaction records.
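The core ideas above (topology- and feature-level augmentation, a small uniform-noise perturbation of item representations, and a jointly weighted contrastive plus generative objective) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the edge-dropout/feature-masking choices, the InfoNCE-style contrastive loss, the MSE generative loss, and the mixing weights `alpha`/`beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj, p=0.1):
    # Topology-level perturbation: randomly remove a fraction p of edges.
    return adj * (rng.random(adj.shape) >= p)

def mask_features(feats, p=0.2):
    # Feature-level perturbation: zero out a fraction p of node features.
    return feats * (rng.random(feats.shape) >= p)

def add_uniform_noise(emb, eps=0.05):
    # Add a small amount of random uniform noise to item representations,
    # nudging the embedding distribution toward uniformity.
    noise = rng.uniform(-1.0, 1.0, emb.shape)
    return emb + eps * noise / np.linalg.norm(noise, axis=-1, keepdims=True)

def contrastive_loss(z1, z2, tau=0.2):
    # InfoNCE-style loss between two augmented views; matching rows
    # (same item in both views) are the positives on the diagonal.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def generative_loss(recon, target):
    # Reconstruction (MSE) loss standing in for the generative branch.
    return np.mean((recon - target) ** 2)

# Toy interaction graph (adjacency) and item features.
adj = (rng.random((6, 6)) < 0.5).astype(float)
feats = rng.standard_normal((6, 8))

# Two differently perturbed graph views, propagated one hop (adj @ feats).
view1 = add_uniform_noise(mask_features(drop_edges(adj) @ feats))
view2 = add_uniform_noise(mask_features(drop_edges(adj) @ feats))

# Jointly weighted objective; alpha and beta are hypothetical weights
# (the full model also includes mask and prediction losses).
alpha, beta = 0.5, 0.5
loss = alpha * contrastive_loss(view1, view2) + beta * generative_loss(view1, feats)
```

In a full model the per-loss weights would be learned or tuned adaptively rather than fixed, and the views would feed a graph encoder instead of a single propagation step.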
We conduct extensive experiments on three publicly available benchmark datasets. The results demonstrate that our approach achieves state-of-the-art performance against 14 competitive methods: hit rate (HR) improves by over 14.39%, and normalized discounted cumulative gain (NDCG) improves by over 18.67%.