Zhang Zhigao, Zhang Hongmei, Zhang Zhifeng, Wang Bin
College of Computer Science and Technology, Inner Mongolia Minzu University, Tongliao, 028000, China.
School of Computer Science and Engineering, Northeastern University, Shenyang, 110169, China.
Sci Rep. 2024 Aug 21;14(1):19413. doi: 10.1038/s41598-024-66349-7.
Modeling user intention from the limited evidence in short historical sequences is a major challenge in session-based recommendation. Research in this area has progressed from traditional methods to deep learning. However, most existing methods focus solely on sequential dependencies or pairwise relations within a session, disregarding the inherent consistency among items. In addition, context adaptation in session intention learning remains underexplored. To this end, we propose a novel session-based model named C-HAN, which consists of two parallel modules: a context-embedded hypergraph attention network and self-attention. These modules are designed to capture, respectively, the inherent consistency among items and the sequential dependencies between them. In the hypergraph attention network module, different types of interaction context are introduced to enhance the model's contextual awareness. Finally, a soft-attention mechanism efficiently integrates the two types of information to collaboratively construct the session representation. Experimental validation on three real-world datasets demonstrates the superior performance of C-HAN compared to state-of-the-art methods. The results show that C-HAN achieves average improvements of 6.55%, 5.91%, and 6.17% over the runner-up baseline on the Precision@K, Recall@K, and MRR evaluation metrics, respectively.
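The final fusion step described in the abstract, where a soft-attention mechanism merges the per-item outputs of the two parallel modules into a single session representation, can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's implementation: the element-wise sum merge, the use of a single query vector for scoring, and all names (`soft_attention_fusion`, `consistency_vecs`, `sequential_vecs`) are assumptions for illustration.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def soft_attention_fusion(consistency_vecs, sequential_vecs, query):
    """Fuse two per-item embedding streams into one session vector.

    consistency_vecs / sequential_vecs: per-item embeddings from the two
    parallel modules (here standing in for the hypergraph attention and
    self-attention outputs); query: a context vector (e.g. the last
    item's embedding) used to score each item. All hypothetical names.
    """
    # Merge the two streams per item (element-wise sum as a simple choice).
    fused = [[c + s for c, s in zip(cv, sv)]
             for cv, sv in zip(consistency_vecs, sequential_vecs)]
    # Soft attention: dot-product score per item, softmax-normalized,
    # then a weighted sum yields the session representation.
    scores = [sum(q * f for q, f in zip(query, item)) for item in fused]
    weights = softmax(scores)
    dim = len(query)
    return [sum(w * item[d] for w, item in zip(weights, fused))
            for d in range(dim)]
```

A quick usage example: with two items whose fused embeddings are the unit vectors `[1, 0]` and `[0, 1]` and query `[1, 0]`, the first item scores higher and dominates the weighted sum, so the session vector leans toward it.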