

ReranKGC: A cooperative retrieve-and-rerank framework for multi-modal knowledge graph completion.

Author Information

Gao Meng, Xie Yutao, Chen Wei, Zhang Feng, Ding Fei, Wang Tengjiao, Yao Jiahui, Zheng Jiabin, Wong Kam-Fai

Affiliations

Key Lab of High Confidence Software Technologies (MOE), School of Computer Science, Peking University, Beijing, China; Research Center for Computational Social Science, Peking University, Beijing, China; Institute of Computational Social Science, Peking University (Qingdao), Qingdao, China.


Publication Information

Neural Netw. 2025 Aug;188:107467. doi: 10.1016/j.neunet.2025.107467. Epub 2025 Apr 12.

Abstract

Multi-modal knowledge graph completion (MMKGC) aims to predict missing links using entities' multi-modal attributes. Embedding-based methods excel at leveraging structural knowledge, making them robust to entity ambiguity, yet their performance is constrained by the underutilization of multi-modal knowledge. Conversely, fine-tuning-based (FT-based) approaches excel at extracting multi-modal knowledge but are hindered by ambiguity issues. To harness the complementary strengths of both methods for MMKGC, this paper introduces an ensemble framework, ReranKGC, which decomposes KGC into a retrieve-and-rerank pipeline. The retriever employs embedding-based methods for initial retrieval. The re-ranker adopts our proposed KGC-CLIP, an FT-based method that utilizes CLIP to extract multi-modal knowledge from attributes for candidate re-ranking. By drawing on a more comprehensive knowledge source, the retriever generates a candidate pool containing entities that are not only semantically but also structurally related to the query entity. Within this higher-quality candidate pool, the re-ranker can better discern candidates' semantics and further refine the initial ranking, thereby improving precision. Through this cooperation, each method maximizes its strengths while partially mitigating the weaknesses of the other, leading to performance that surpasses either method's individual capabilities. Extensive experiments on link prediction tasks demonstrate that ReranKGC consistently improves baseline performance and outperforms state-of-the-art models.
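
The abstract describes the retrieve-and-rerank pipeline only at a high level. The following Python fragment is a minimal, hypothetical sketch of such a two-stage pipeline; the function names, dot-product scoring, and feature shapes are illustrative assumptions and are not taken from the paper or its KGC-CLIP implementation.

import numpy as np

# Illustrative sketch only; not the paper's code.
def retrieve(query_emb, entity_embs, k=100):
    # Embedding-based retriever: score every entity against the (head, relation)
    # query embedding (here a simple dot product) and keep the top-k candidates.
    scores = entity_embs @ query_emb
    return np.argsort(-scores)[:k]

def rerank(query_mm, candidate_mm, candidate_ids):
    # FT-based re-ranker: re-score the candidate pool with multi-modal
    # (e.g. CLIP-style image/text) features and return a refined ordering.
    scores = candidate_mm @ query_mm
    return [candidate_ids[i] for i in np.argsort(-scores)]

def predict_tail(query_emb, query_mm, entity_embs, entity_mm, k=100):
    # Full pipeline: initial retrieval from structural embeddings,
    # then re-ranking of the candidate pool with multi-modal features.
    pool = retrieve(query_emb, entity_embs, k)
    return rerank(query_mm, entity_mm[pool], pool)

The point of the two stages is that they draw on different knowledge sources: the retriever's pool already mixes structurally and semantically related entities, and the re-ranker only has to refine the ordering within that pool.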

