

Knowledge graph-based thought: a knowledge graph-enhanced LLM framework for pan-cancer question answering.

Authors

Feng Yichun, Zhou Lu, Ma Chao, Zheng Yikai, He Ruikun, Li Yixue

Affiliations

Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, 310024 Hangzhou, China.

Guangzhou National Laboratory, Guangzhou International Bio Island, 510005 Guangzhou, China.

Publication

Gigascience. 2025 Jan 6;14. doi: 10.1093/gigascience/giae082.

DOI: 10.1093/gigascience/giae082
PMID: 39775838
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11702363/
Abstract

BACKGROUND

In recent years, large language models (LLMs) have shown promise in various domains, notably in biomedical sciences. However, their real-world application is often limited by issues like erroneous outputs and hallucinatory responses.

RESULTS

We developed the knowledge graph-based thought (KGT) framework, an innovative solution that integrates LLMs with knowledge graphs (KGs) to improve their initial responses by utilizing verifiable information from KGs, thus significantly reducing factual errors in reasoning. The KGT framework demonstrates strong adaptability and performs well across various open-source LLMs. Notably, KGT can facilitate the discovery of new uses for existing drugs through potential drug-cancer associations and can assist in predicting resistance by analyzing relevant biomarkers and genetic mechanisms. To evaluate the knowledge graph question answering task within biomedicine, we utilize a pan-cancer knowledge graph to develop a pan-cancer question answering benchmark, named pan-cancer question answering.

CONCLUSIONS

The KGT framework substantially improves the accuracy and utility of LLMs in the biomedical field. This study serves as a proof of concept, demonstrating its exceptional performance in biomedical question answering.
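
The abstract describes KGT's core idea: ground an LLM's draft answer in verifiable triples retrieved from a knowledge graph, so factual claims can be checked rather than hallucinated. A minimal, self-contained sketch of that retrieve-and-verify pattern follows; the triples, entity names, and helper functions are illustrative stand-ins, not the authors' KGT implementation or the actual pan-cancer knowledge graph.

```python
# Toy pan-cancer knowledge graph as (head, relation, tail) triples.
# In KGT this role is played by a real, curated knowledge graph.
TRIPLES = [
    ("crizotinib", "treats", "ALK-positive NSCLC"),
    ("ALK mutation", "confers_resistance_to", "crizotinib"),
    ("imatinib", "treats", "CML"),
]

def retrieve_facts(entity):
    """Return every triple mentioning the entity (simple KG lookup)."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

def verify_claim(head, relation, tail):
    """Check a single claim extracted from an LLM draft against the KG."""
    return (head, relation, tail) in TRIPLES

# Retrieve verifiable context for a question about crizotinib,
# then vet two hypothetical claims from a model's draft answer.
facts = retrieve_facts("crizotinib")
supported = verify_claim("crizotinib", "treats", "ALK-positive NSCLC")
unsupported = verify_claim("crizotinib", "treats", "CML")
print(len(facts), supported, unsupported)  # → 2 True False
```

In a full pipeline, the retrieved triples would be fed back into the prompt and unsupported claims revised, which is the kind of correction loop the abstract credits with reducing factual errors.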


Figures (PMC11702363):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ddd/11702363/5ebc45320055/giae082fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ddd/11702363/96f68041c9f0/giae082fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ddd/11702363/3c60b5e810ea/giae082fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ddd/11702363/32120cec4123/giae082fig4.jpg

Similar Articles

1. Knowledge graph-based thought: a knowledge graph-enhanced LLM framework for pan-cancer question answering.
   Gigascience. 2025 Jan 6;14. doi: 10.1093/gigascience/giae082.
2. Biomedical knowledge graph-optimized prompt generation for large language models.
   Bioinformatics. 2024 Sep 2;40(9). doi: 10.1093/bioinformatics/btae560.
3. Improving Dietary Supplement Information Retrieval: Development of a Retrieval-Augmented Generation System With Large Language Models.
   J Med Internet Res. 2025 Mar 19;27:e67677. doi: 10.2196/67677.
4. Leveraging Medical Knowledge Graphs Into Large Language Models for Diagnosis Prediction: Design and Application Study.
   JMIR AI. 2025 Feb 24;4:e58670. doi: 10.2196/58670.
5. KRAGEN: a knowledge graph-enhanced RAG framework for biomedical problem solving using large language models.
   Bioinformatics. 2024 Jun 3;40(6). doi: 10.1093/bioinformatics/btae353.
6. Interpretable medical image Visual Question Answering via multi-modal relationship graph learning.
   Med Image Anal. 2024 Oct;97:103279. doi: 10.1016/j.media.2024.103279. Epub 2024 Jul 20.
7. An Automatic and End-to-End System for Rare Disease Knowledge Graph Construction Based on Ontology-Enhanced Large Language Models: Development Study.
   JMIR Med Inform. 2024 Dec 18;12:e60665. doi: 10.2196/60665.
8. From Answers to Insights: Unveiling the Strengths and Limitations of ChatGPT and Biomedical Knowledge Graphs.
   Res Sq. 2023 Aug 1:rs.3.rs-3185632. doi: 10.21203/rs.3.rs-3185632/v1.
9. Evaluating the effectiveness of prompt engineering for knowledge graph question answering.
   Front Artif Intell. 2025 Jan 13;7:1454258. doi: 10.3389/frai.2024.1454258. eCollection 2024.
10. Combining large language models with enterprise knowledge graphs: a perspective on enhanced natural language understanding.
   Front Artif Intell. 2024 Aug 27;7:1460065. doi: 10.3389/frai.2024.1460065. eCollection 2024.

Cited By

1. Improving Biomedical Knowledge Graph Quality: A Community Approach.
   ArXiv. 2025 Aug 29:arXiv:2508.21774v1.
2. textToKnowledgeGraph: Generation of Molecular Interaction Knowledge Graphs Using Large Language Models for Exploration in Cytoscape.
   bioRxiv. 2025 Jul 21:2025.07.17.664328. doi: 10.1101/2025.07.17.664328.
3. A natural language processing approach to support biomedical data harmonization: Leveraging large language models.
   PLoS One. 2025 Jul 24;20(7):e0328262. doi: 10.1371/journal.pone.0328262. eCollection 2025.

References

1. Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks.
   J Am Med Inform Assoc. 2024 Sep 1;31(9):1865-1874. doi: 10.1093/jamia/ocae037.
2. ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge.
   Cureus. 2023 Jun 24;15(6):e40895. doi: 10.7759/cureus.40895. eCollection 2023 Jun.
3. Large language models encode clinical knowledge.
   Nature. 2023 Aug;620(7972):172-180. doi: 10.1038/s41586-023-06291-2. Epub 2023 Jul 12.
4. SynLethDB 2.0: a web-based knowledge graph database on synthetic lethality for novel anticancer drug discovery.
   Database (Oxford). 2022 May 13;2022. doi: 10.1093/database/baac030.
5. Analysis of Drug Repositioning and Prediction Techniques: A Concise Review.
   Curr Top Med Chem. 2022;22(23):1897-1906. doi: 10.2174/1568026622666220317164016.
6. Multimodal reasoning based on knowledge graph embedding for specific diseases.
   Bioinformatics. 2022 Apr 12;38(8):2235-2245. doi: 10.1093/bioinformatics/btac085.
7. Learning without Forgetting.
   IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2935-2947. doi: 10.1109/TPAMI.2017.2773081. Epub 2017 Nov 14.
8. A prospective study of topical carteolol therapy in Chinese infants with superficial infantile hemangioma.
   Pediatr Dermatol. 2018 Jan;35(1):121-125. doi: 10.1111/pde.13361. Epub 2017 Dec 15.
9. The use of cellular thermal shift assay (CETSA) to study Crizotinib resistance in ALK-expressing human cancers.
   Sci Rep. 2016 Sep 19;6:33710. doi: 10.1038/srep33710.
10. Current Strategies to Overcome Resistance to ALK-Inhibitor Agents.
   Curr Drug Metab. 2015;16(7):585-96. doi: 10.2174/1389200216666150812142059.