


Assessment of fine-tuned large language models for real-world chemistry and material science applications.

Authors

Van Herck Joren, Gil María Victoria, Jablonka Kevin Maik, Abrudan Alex, Anker Andy S, Asgari Mehrdad, Blaiszik Ben, Buffo Antonio, Choudhury Leander, Corminboeuf Clemence, Daglar Hilal, Elahi Amir Mohammad, Foster Ian T, Garcia Susana, Garvin Matthew, Godin Guillaume, Good Lydia L, Gu Jianan, Xiao Hu Noémie, Jin Xin, Junkers Tanja, Keskin Seda, Knowles Tuomas P J, Laplaza Ruben, Lessona Michele, Majumdar Sauradeep, Mashhadimoslem Hossein, McIntosh Ruaraidh D, Moosavi Seyed Mohamad, Mouriño Beatriz, Nerli Francesca, Pevida Covadonga, Poudineh Neda, Rajabi-Kochi Mahyar, Saar Kadi L, Hooriabad Saboor Fahimeh, Sagharichiha Morteza, Schmidt K J, Shi Jiale, Simone Elena, Svatunek Dennis, Taddei Marco, Tetko Igor, Tolnai Domonkos, Vahdatifar Sahar, Whitmer Jonathan, Wieland D C Florian, Willumeit-Römer Regine, Züttel Andreas, Smit Berend

Affiliations

Laboratory of Molecular Simulation (LSMO), Institut des Sciences et Ingénierie Chimiques, École Polytechnique Fédérale de Lausanne (EPFL), Rue de l'Industrie 17, CH-1951 Sion, Switzerland.

Instituto de Ciencia y Tecnología del Carbono (INCAR), CSIC, Francisco Pintado Fe 26, 33011 Oviedo, Spain.

Publication

Chem Sci. 2024 Nov 22;16(2):670-684. doi: 10.1039/d4sc04401k. eCollection 2025 Jan 2.

DOI: 10.1039/d4sc04401k
PMID: 39664810
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11629507/
Abstract

The current generation of large language models (LLMs) has limited chemical knowledge. Recently, it has been shown that these LLMs can learn and predict chemical properties through fine-tuning. Using natural language to train machine learning models opens doors to a wider chemical audience, as field-specific featurization techniques can be omitted. In this work, we explore the potential and limitations of this approach. We studied the performance of fine-tuning three open-source LLMs (GPT-J-6B, Llama-3.1-8B, and Mistral-7B) for a range of different chemical questions. We benchmark their performances against "traditional" machine learning models and find that, in most cases, the fine-tuning approach is superior for a simple classification problem. Depending on the size of the dataset and the type of questions, we also successfully address more sophisticated problems. The most important conclusions of this work are that, for all datasets considered, their conversion into an LLM fine-tuning training set is straightforward and that fine-tuning with even relatively small datasets leads to predictive models. These results suggest that the systematic use of LLMs to guide experiments and simulations will be a powerful technique in any research study, significantly reducing unnecessary experiments or computations.
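The abstract's central practical claim is that converting a chemistry dataset into an LLM fine-tuning training set is straightforward. As a hedged illustration of what such a conversion can look like (this is not the paper's code; the toy records, the solubility property, and all names are invented for the example), the sketch below renders a small tabular dataset as natural-language prompt/completion pairs in JSONL, a common input format for fine-tuning pipelines:

```python
import json

# Hypothetical toy dataset: SMILES strings with a binary property label.
records = [
    {"smiles": "CCO", "soluble": True},
    {"smiles": "c1ccccc1", "soluble": False},
]

def to_finetune_example(rec):
    """Render one record as a plain-English question/answer pair."""
    prompt = f"Is the molecule with SMILES {rec['smiles']} water-soluble?"
    completion = "yes" if rec["soluble"] else "no"
    return {"prompt": prompt, "completion": completion}

examples = [to_finetune_example(r) for r in records]

# Serialize one JSON object per line (JSONL) for a fine-tuning pipeline.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl)
```

Because the featurization is just a sentence template, no field-specific descriptors are needed, which is the accessibility argument the authors make; for regression-style questions the completion would carry a (possibly rounded) numeric value instead of a class label.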


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e20f/11694951/7501d2a1b40f/d4sc04401k-f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e20f/11694951/1c46ced7115f/d4sc04401k-f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e20f/11694951/f5f8a0e1d5ae/d4sc04401k-f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e20f/11694951/ff0e2667ed83/d4sc04401k-f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e20f/11694951/c7393402eaa7/d4sc04401k-f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e20f/11694951/70b75b860a1b/d4sc04401k-f6.jpg

Similar Articles

1. Assessing Completeness of Clinical Histories Accompanying Imaging Orders Using Adapted Open-Source and Closed-Source Large Language Models.
Radiology. 2025 Feb;314(2):e241051. doi: 10.1148/radiol.241051.
2. A dataset and benchmark for hospital course summarization with adapted large language models.
J Am Med Inform Assoc. 2025 Mar 1;32(3):470-479. doi: 10.1093/jamia/ocae312.
3. Open-source LLMs for text annotation: a practical guide for model setting and fine-tuning.
J Comput Soc Sci. 2025;8(1):17. doi: 10.1007/s42001-024-00345-9. Epub 2024 Dec 18.
4. Privacy-ensuring Open-weights Large Language Models Are Competitive with Closed-weights GPT-4o in Extracting Chest Radiography Findings from Free-Text Reports.
Radiology. 2025 Jan;314(1):e240895. doi: 10.1148/radiol.240895.
5. Performance and Reproducibility of Large Language Models in Named Entity Recognition: Considerations for the Use in Controlled Environments.
Drug Saf. 2025 Mar;48(3):287-303. doi: 10.1007/s40264-024-01499-1. Epub 2024 Dec 11.
6. Advancing entity recognition in biomedicine via instruction tuning of large language models.
Bioinformatics. 2024 Mar 29;40(4). doi: 10.1093/bioinformatics/btae163.
7. PH-LLM: Public Health Large Language Models for Infoveillance.
medRxiv. 2025 Feb 10:2025.02.08.25321587. doi: 10.1101/2025.02.08.25321587.
8. Evaluating the effectiveness of biomedical fine-tuning for large language models on clinical tasks.
J Am Med Inform Assoc. 2025 Jun 1;32(6):1015-1024. doi: 10.1093/jamia/ocaf045.
9. Distilling large language models for matching patients to clinical trials.
J Am Med Inform Assoc. 2024 Sep 1;31(9):1953-1963. doi: 10.1093/jamia/ocae073.

Cited By

1. Advancing plant metabolic research by using large language models to expand databases and extract labeled data.
Appl Plant Sci. 2025 May 14;13(4):e70007. doi: 10.1002/aps3.70007. eCollection 2025 Jul-Aug.
2. Recent advances in MOF composites for photocatalysis.
Chem Sci. 2025 Jun 27. doi: 10.1039/d5sc03065j.
3. Artificial Intelligence Paradigms for Next-Generation Metal-Organic Framework Research.
4. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT.
J Am Chem Soc. 2025 Jul 9;147(27):23367-23380. doi: 10.1021/jacs.5c08214. Epub 2025 Jun 24.
5. Exploring the chemical design space of metal-organic frameworks for photocatalysis.
Chem Sci. 2025 May 13. doi: 10.1039/d5sc01100k.
6. A Perspective on Foundation Models in Chemistry.
JACS Au. 2025 Mar 25;5(4):1499-1518. doi: 10.1021/jacsau.4c01160. eCollection 2025 Apr 28.
7. Explainable Synthesizability Prediction of Inorganic Crystal Polymorphs Using Large Language Models.
Angew Chem Int Ed Engl. 2025 May;64(19):e202423950. doi: 10.1002/anie.202423950. Epub 2025 Mar 22.

References

1. Chemprop: A Machine Learning Package for Chemical Property Prediction.
J Chem Inf Model. 2024 Jan 8;64(1):9-17. doi: 10.1021/acs.jcim.3c01250. Epub 2023 Dec 26.
2. 14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon.
Digit Discov. 2023 Aug 8;2(5):1233-1250. doi: 10.1039/d3dd00113j. eCollection 2023 Oct 9.
3. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT.
J Chem Inf Model. 2023 Mar 27;63(6):1649-1655. doi: 10.1021/acs.jcim.3c00285. Epub 2023 Mar 16.
4. Science-Driven Atomistic Machine Learning.
Angew Chem Int Ed Engl. 2023 Jun 26;62(26):e202219170. doi: 10.1002/anie.202219170. Epub 2023 Apr 13.
5. DeepStruc: towards structure solution from pair distribution function data using deep generative models.
Digit Discov. 2022 Nov 28;2(1):69-80. doi: 10.1039/d2dd00086e. eCollection 2023 Feb 13.
6. Predictive chemistry: machine learning for reaction deployment, reaction development, and reaction discovery.
Chem Sci. 2022 Nov 28;14(2):226-244. doi: 10.1039/d2sc05089g. eCollection 2023 Jan 4.
7. Predicting Adhesive Free Energies of Polymer-Surface Interactions with Machine Learning.
ACS Appl Mater Interfaces. 2022 Aug 17;14(32):37161-37169. doi: 10.1021/acsami.2c08891. Epub 2022 Aug 2.
8. Combining Machine Learning and Molecular Simulations to Unlock Gas Separation Potentials of MOF Membranes and MOF/Polymer MMMs.
ACS Appl Mater Interfaces. 2022 Jul 20;14(28):32134-32148. doi: 10.1021/acsami.2c08977. Epub 2022 Jul 11.
9. Making the collective knowledge of chemistry open and machine actionable.
Nat Chem. 2022 Apr;14(4):365-376. doi: 10.1038/s41557-022-00910-7. Epub 2022 Apr 4.
10. Learning the molecular grammar of protein condensates from sequence determinants and embeddings.
Proc Natl Acad Sci U S A. 2021 Apr 13;118(15). doi: 10.1073/pnas.2019053118.