Knowledge Graphs and Pretrained Language Models Enhanced Representation Learning for Conversational Recommender Systems.

Authors

Qiu Zhangchi, Tao Ye, Pan Shirui, Liew Alan Wee-Chung

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6107-6121. doi: 10.1109/TNNLS.2024.3395334. Epub 2025 Apr 4.

Abstract

Conversational recommender systems (CRSs) utilize natural language interactions and dialog history to infer user preferences and provide accurate recommendations. Due to the limited conversation context and background knowledge, existing CRSs rely on external sources such as knowledge graphs (KGs) to enrich the context and model entities based on their interrelations. However, these methods ignore the rich intrinsic information within entities. To address this, we introduce the knowledge-enhanced entity representation learning (KERL) framework, which leverages both the KG and a pretrained language model (PLM) to improve the semantic understanding of entities for CRS. In our KERL framework, entity textual descriptions are encoded via a PLM, while a KG helps reinforce the representation of these entities. We also employ positional encoding to effectively capture the temporal information of entities in a conversation. The enhanced entity representation is then used to develop a recommender component that fuses both entity and contextual representations for more informed recommendations, as well as a dialog component that generates informative entity-related information in the response text. A high-quality KG with aligned entity descriptions is constructed to facilitate this study, namely, the Wiki Movie Knowledge Graph (WikiMKG). The experimental results show that KERL achieves state-of-the-art results in both recommendation and response generation tasks. Our code is publicly available at the link: https://github.com/icedpanda/KERL.
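The abstract describes entity representations built from two views, a PLM encoding of the entity's textual description and a KG-based embedding, combined with positional information about when the entity appears in the conversation. The following is a minimal sketch of that idea, assuming a PyTorch and HuggingFace Transformers setup; the class and layer names (EntityEncoder, kg_proj, fuse) and the choice of bert-base-uncased are illustrative placeholders, not the authors' actual implementation, which is available in the linked repository.

```python
# Minimal sketch of PLM + KG entity representation with positional encoding.
# Assumes torch and transformers are installed; all names are illustrative.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class EntityEncoder(nn.Module):
    """Encode an entity from (a) its textual description via a PLM and
    (b) a KG embedding, then add a positional encoding reflecting where
    the entity was mentioned in the conversation."""

    def __init__(self, plm_name="bert-base-uncased", kg_dim=128,
                 hidden_dim=768, max_pos=64):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(plm_name)
        self.plm = AutoModel.from_pretrained(plm_name)
        self.kg_proj = nn.Linear(kg_dim, hidden_dim)       # project KG embedding into PLM space
        self.pos_emb = nn.Embedding(max_pos, hidden_dim)   # temporal position of entity in dialog
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)  # fuse text and KG views

    def forward(self, descriptions, kg_embeddings, positions):
        # PLM view: [CLS] representation of each entity description
        toks = self.tokenizer(descriptions, padding=True, truncation=True,
                              return_tensors="pt")
        text_repr = self.plm(**toks).last_hidden_state[:, 0]   # (num_entities, hidden_dim)
        # KG view: precomputed graph embedding of each entity (e.g., from a GNN over WikiMKG)
        kg_repr = self.kg_proj(kg_embeddings)                  # (num_entities, hidden_dim)
        # Fuse both views, then add temporal (positional) information
        fused = self.fuse(torch.cat([text_repr, kg_repr], dim=-1))
        return fused + self.pos_emb(positions)
```

In a CRS, entity representations of this kind would be pooled with the dialog context representation to score candidate items and to condition response generation; the specific fusion, KG encoder, and training objectives used by KERL are those reported in the paper and its code release.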
