Integrating chemistry knowledge in large language models via prompt engineering.

Authors

Liu Hongxuan, Yin Haoyu, Luo Zhiyao, Wang Xiaonan

Affiliations

Department of Chemical Engineering, Tsinghua University, Beijing, 100084, China.

Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, Oxford, OX3 7DQ, United Kingdom.

Publication

Synth Syst Biotechnol. 2024 Jul 24;10(1):23-38. doi: 10.1016/j.synbio.2024.07.004. eCollection 2025.

Abstract

This paper presents a study on integrating domain-specific knowledge into prompt engineering to enhance the performance of large language models (LLMs) in scientific domains. The proposed domain-knowledge-embedded prompt engineering method outperforms traditional prompting strategies on several metrics, including capability, accuracy, F1 score, and reduction in hallucination. The method's effectiveness is demonstrated through case studies on complex materials: the MacMillan catalyst, paclitaxel, and lithium cobalt oxide. The results suggest that domain-knowledge prompts can guide LLMs to generate more accurate and relevant responses, highlighting the potential of LLMs, when equipped with domain-specific prompts, as powerful tools for scientific discovery and innovation. The study also discusses limitations and future directions for domain-specific prompt engineering.
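The abstract does not give the paper's exact prompt templates, so the following is only a minimal sketch of the general idea of domain-knowledge embedded prompting: expert facts are prepended to the query and the model is constrained to answer from them. The function name `build_prompt` and the example facts about the MacMillan catalyst are illustrative assumptions, not taken from the paper.

```python
def build_prompt(question, domain_facts=None):
    """Assemble an LLM prompt, optionally embedding domain knowledge.

    With domain_facts supplied, the prompt prepends expert context and
    instructs the model to stay grounded in it (intended to curb
    hallucination); without it, the prompt is a plain zero-shot query.
    """
    if not domain_facts:
        return f"Question: {question}\nAnswer:"
    context = "\n".join(f"- {fact}" for fact in domain_facts)
    return (
        "You are a chemistry expert. Use only the domain knowledge "
        "below; reply 'unknown' if it is insufficient.\n"
        f"Domain knowledge:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )


# Illustrative facts for one of the paper's case studies.
facts = [
    "The MacMillan catalyst is a chiral imidazolidinone organocatalyst.",
    "It activates alpha,beta-unsaturated aldehydes via iminium formation.",
]
question = "What type of catalysis does the MacMillan catalyst enable?"
plain = build_prompt(question)
embedded = build_prompt(question, facts)
```

The `embedded` prompt would then be sent to an LLM in place of `plain`; the paper's evaluation compares such variants on accuracy, F1, and hallucination metrics.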

