

Context-embedded hypergraph attention network and self-attention for session recommendation.

Authors

Zhang Zhigao, Zhang Hongmei, Zhang Zhifeng, Wang Bin

Affiliations

College of Computer Science and Technology, Inner Mongolia Minzu University, Tongliao, 028000, China.

School of Computer Science and Engineering, Northeastern University, Shenyang, 110169, China.

Publication Information

Sci Rep. 2024 Aug 21;14(1):19413. doi: 10.1038/s41598-024-66349-7.

Abstract

Modeling user intention from the limited evidence in short historical sequences is a major challenge in session recommendation. Research in this area has progressed from traditional methods to deep learning. However, most existing approaches concentrate solely on sequential dependence or pairwise relations within a session, disregarding the inherent consistency among items, and context adaptation in session intention learning remains under-explored. To this end, we propose a novel session-based model named C-HAN, which consists of two parallel modules: a context-embedded hypergraph attention network and self-attention. These modules capture the inherent consistency among items and the sequential dependencies between them, respectively. In the hypergraph attention network module, different types of interaction contexts are introduced to enhance the model's contextual awareness. Finally, a soft-attention mechanism efficiently integrates the two types of information to jointly construct the session representation. Experimental validation on three real-world datasets demonstrates the superior performance of C-HAN compared to state-of-the-art methods: C-HAN achieves average improvements of 6.55%, 5.91%, and 6.17% over the runner-up baseline on the Precision@K, Recall@K, and MRR metrics, respectively.
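The abstract only sketches the architecture, so the following is a minimal PyTorch sketch of the two-branch design it describes, not the authors' implementation: the hypergraph attention module and the interaction contexts are simplified to standard multi-head attention, and all names (e.g. TwoBranchSessionEncoder) are hypothetical. It illustrates how two parallel branches over the same session can be fused by soft attention into a session representation used to score candidate items.

```python
# Minimal sketch of the two-branch design described in the abstract (assumed structure,
# not the published C-HAN code): a "consistency" branch and a "sequence" branch are run
# in parallel and fused by soft attention into a single session representation.
import torch
import torch.nn as nn


class TwoBranchSessionEncoder(nn.Module):
    def __init__(self, num_items: int, dim: int = 64, heads: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        # Branch 1: stand-in for the hypergraph attention module (item consistency).
        self.consistency_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Branch 2: self-attention over the ordered session (sequential dependence).
        self.sequence_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Soft attention that weighs the two branch summaries.
        self.fuse = nn.Linear(dim, 1)

    def forward(self, session: torch.Tensor) -> torch.Tensor:
        x = self.item_emb(session)                       # (batch, len, dim)
        cons, _ = self.consistency_attn(x, x, x)         # branch 1 output
        seq, _ = self.sequence_attn(x, x, x)             # branch 2 output
        branches = torch.stack([cons.mean(1), seq.mean(1)], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.fuse(branches), dim=1)          # soft-attention weights
        return (weights * branches).sum(dim=1)           # session representation (batch, dim)


# Usage: score candidate items by dot product with the session representation.
model = TwoBranchSessionEncoder(num_items=1000)
session = torch.randint(0, 1000, (4, 10))                # a batch of 4 sessions of length 10
scores = model(session) @ model.item_emb.weight.T        # (4, 1000) candidate scores
```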


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c392/11339270/95a806c6d44f/41598_2024_66349_Fig1_HTML.jpg

Similar Articles

DyCARS: A dynamic context-aware recommendation system.
Math Biosci Eng. 2024 Feb 5;21(3):3563-3593. doi: 10.3934/mbe.2024157.

DHM-Net: Deep Hypergraph Modeling for Robust Feature Matching.
IEEE Trans Image Process. 2024;33:6002-6015. doi: 10.1109/TIP.2024.3477916. Epub 2024 Oct 22.

Heterogeneous Hypergraph Variational Autoencoder for Link Prediction.
IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4125-4138. doi: 10.1109/TPAMI.2021.3059313. Epub 2022 Jul 1.
