
DSGRec: Dual-path Selection Graph for Multimodal Recommendation

Authors

Liu Zihao, Qu Wen

Affiliations

College of Computer Science and Technology, Dalian Maritime University, Dalian, Liaoning, China.

College of Computer Science and Artificial Intelligence, Liaoning Normal University, Dalian, Liaoning, China.

Publication

PeerJ Comput Sci. 2025 Apr 15;11:e2779. doi: 10.7717/peerj-cs.2779. eCollection 2025.

Abstract

With the advancement of digital streaming technology, multi-modal recommendation systems have gained significant attention. Current graph-based multi-modal recommendation approaches typically model user interests using either user interaction signals or multi-modal item information derived from heterogeneous graphs. Although methods based on graph convolutional networks (GCNs) have achieved notable success, they still face two key limitations: (1) the narrow interpretation of interaction information, leading to incomplete modeling of user behavior, and (2) a lack of fine-grained collaboration between user behavior and multi-modal information. To address these issues, we propose a novel method that decomposes interaction information into two distinct signal pathways, referred to as a dual-path selection architecture, named the Dual-path Selective Graph Recommender (DSGRec). DSGRec is designed to deliver more accurate and personalized recommendations by facilitating the positive collaboration of interaction data and multi-modal information. To further enhance the representation of these signals, we introduce two key components: (1) behavior-aware multimodal signal augmentation, which extracts rich multimodal semantic information; and (2) hypergraph-guided cooperative signal enhancement, which captures hybrid global information. Our model learns dual-path selection signals via a primary module and introduces two auxiliary modules to adjust these signals. We introduce independent contrastive learning tasks for the auxiliary signals, enabling DSGRec to explore the mechanisms behind feature embeddings from different perspectives. This approach ensures that each auxiliary module aligns with the user-item interaction view independently, calibrating its contribution based on historical interactions.
Extensive experiments conducted on three benchmark datasets demonstrate the superiority of DSGRec over several state-of-the-art recommendation baselines, highlighting the effectiveness of our method.
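The abstract describes aligning each auxiliary module's embeddings with the user-item interaction view through independent contrastive learning tasks. The paper does not specify the loss in the abstract; a common choice for this kind of cross-view alignment is an InfoNCE objective, where matching rows of two embedding views are positives and other rows in the batch are negatives. The sketch below is a generic, minimal version of that idea, not the authors' actual implementation; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def infonce_loss(view_a, view_b, temperature=0.2):
    """Generic InfoNCE contrastive loss between two embedding views.

    Row i of view_a and row i of view_b (e.g. the same user seen from an
    auxiliary view and from the interaction view) form a positive pair;
    every other row in the batch serves as an in-batch negative.
    This is an illustrative sketch, not the paper's exact objective.
    """
    # L2-normalize rows so dot products become cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature            # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Row-wise log-softmax; positives sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Under this formulation, an auxiliary view that is well aligned with the interaction view yields a lower loss than a misaligned one, which is how each auxiliary module's contribution can be calibrated against historical interactions.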


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/642a/12190342/45d516596e23/peerj-cs-11-2779-g001.jpg
