Self-supervised global context graph neural network for session-based recommendation.

Author information

Chu Fei, Jia Caiyan

Affiliations

School of Computer and Information Technology & Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China.

Publication information

PeerJ Comput Sci. 2022 Jul 28;8:e1055. doi: 10.7717/peerj-cs.1055. eCollection 2022.

Abstract

Session-based recommendation (SBR) aims to recommend the next items based on anonymous behavior sequences collected over a short period of time. Compared with other recommendation paradigms, the information available in SBR is very limited; therefore, capturing item relations across sessions is crucial for SBR. Recently, many methods have been proposed to learn item transition relationships over all sessions. Despite their success, these methods may amplify the impact of noisy interactions and ignore the complex high-order relationships between non-adjacent items. In this study, we propose a self-supervised global context graph neural network (SGC-GNN) that models high-order transition relations between items over all sessions using virtual context vectors, each of which connects to all items in a given session, enabling information to be collected and propagated beyond adjacent items. Moreover, to improve the robustness of the proposed model, we devise a contrastive self-supervised learning (SSL) module as an auxiliary task that jointly learns more robust item representations while training the model on the SBR task. Experimental results on three benchmark datasets demonstrate the superiority of our model over state-of-the-art (SOTA) methods and validate the effectiveness of the context vectors and the self-supervised module.
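The virtual-context idea can be sketched minimally: a context node pools the embeddings of every item in a session and broadcasts its state back to all of them, so even non-adjacent items exchange information in a single hop. This is an illustrative sketch only, not the paper's implementation; the function names and the fixed 0.5 mixing weight are assumptions (SGC-GNN would use learned, gated message-passing updates).

```python
from typing import List, Tuple

Vector = List[float]

def mean(vectors: List[Vector]) -> Vector:
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def context_message_passing(item_embs: List[Vector],
                            steps: int = 2) -> Tuple[List[Vector], Vector]:
    """Toy virtual-context update for one session.

    The context vector aggregates all item embeddings (mean pooling),
    then each item mixes its own state with the broadcast context,
    letting information propagate beyond adjacent items.
    """
    items = [v[:] for v in item_embs]
    context = mean(items)                      # initialize from the whole session
    for _ in range(steps):
        # every item receives the global context (simple residual mix)
        items = [[0.5 * x + 0.5 * c for x, c in zip(v, context)] for v in items]
        # the context node re-aggregates the updated items
        context = mean(items)
    return items, context
```

With the fixed 0.5 weights the context stays at the session mean while item embeddings contract toward it, which shows the information flow; a trained model would instead learn how much global context each item absorbs.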

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ba6e/9454781/523d9d3eb065/peerj-cs-08-1055-g001.jpg
