Suppr 超能文献



Human interpretable structure-property relationships in chemistry using explainable machine learning and large language models.

Authors

Wellawatte Geemi P, Schwaller Philippe

Affiliations

Laboratory of Artificial Chemical Intelligence, Institute of Chemical Sciences and Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.

National Centre of Competence in Research (NCCR) Catalysis, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.

Publication

Commun Chem. 2025 Jan 14;8(1):11. doi: 10.1038/s42004-024-01393-y.

DOI: 10.1038/s42004-024-01393-y
PMID: 39809811
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11733140/
Abstract

Explainable Artificial Intelligence (XAI) is an emerging field in AI that aims to address the opaque nature of machine learning models. Furthermore, it has been shown that XAI can be used to extract input-output relationships, making them a useful tool in chemistry to understand structure-property relationships. However, one of the main limitations of XAI methods is that they are developed for technically oriented users. We propose the XpertAI framework that integrates XAI methods with large language models (LLMs) accessing scientific literature to generate accessible natural language explanations of raw chemical data automatically. We conducted 5 case studies to evaluate the performance of XpertAI. Our results show that XpertAI combines the strengths of LLMs and XAI tools in generating specific, scientific, and interpretable explanations.
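The pipeline the abstract describes — score input features with a model-agnostic XAI method, then hand the ranked features to an LLM to narrate in accessible language — can be sketched roughly as below. This is an illustrative toy only, not the XpertAI API: the feature names, threshold model, and prompt wording are invented for the example, and the LLM call is left as a prompt string rather than an actual request.

```python
# Toy sketch of an XAI -> LLM explanation pipeline (illustrative only).
# Step 1: rank descriptors by permutation importance (accuracy drop when
# one feature column is shuffled). Step 2: turn the ranking into a
# natural-language prompt an LLM would expand into an explanation.

import random

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = random.Random(seed)
    def score(rows):
        return sum(1 for xi, yi in zip(rows, y) if model(xi) == yi) / len(y)
    base = score(X)
    importances = {}
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            perturbed = [row[:j] + [c] + row[j + 1:]
                         for row, c in zip(X, col)]
            drops.append(base - score(perturbed))
        importances[j] = sum(drops) / n_repeats
    return importances

def build_explanation_prompt(feature_names, importances, property_name):
    """Compose the question an LLM would answer from the literature."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    top = ", ".join(feature_names[j] for j in ranked[:3])
    return (f"Explain, citing chemical principles, why the descriptors "
            f"{top} most strongly influence {property_name}.")

# Synthetic data: the label depends only on feature 0 (a logP proxy),
# so feature 0 should dominate the importance ranking.
X = [[x, (x * 7) % 5, (x * 3) % 4] for x in range(20)]
y = [1 if row[0] < 10 else 0 for row in X]
model = lambda row: 1 if row[0] < 10 else 0

imp = permutation_importance(model, X, y)
prompt = build_explanation_prompt(["logP", "TPSA", "n_rings"], imp,
                                  "aqueous solubility")
print(prompt)
```

Shuffling the only feature the toy model uses destroys its accuracy, while shuffling the ignored features changes nothing, so the prompt leads with the genuinely influential descriptor — the same grounding step that keeps a downstream LLM explanation specific rather than generic.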


Figures (full-size images at PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c818/11733140/a9ca6c1fffcd/42004_2024_1393_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c818/11733140/33045011a1e8/42004_2024_1393_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c818/11733140/b0107294f427/42004_2024_1393_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c818/11733140/3e9c260cc617/42004_2024_1393_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c818/11733140/66ed3780235b/42004_2024_1393_Fig5_HTML.jpg

Similar articles

1
Human interpretable structure-property relationships in chemistry using explainable machine learning and large language models.
Commun Chem. 2025 Jan 14;8(1):11. doi: 10.1038/s42004-024-01393-y.
2
Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach.
J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030.
3
From Black Boxes to Actionable Insights: A Perspective on Explainable Artificial Intelligence for Scientific Discovery.
J Chem Inf Model. 2023 Dec 25;63(24):7617-7627. doi: 10.1021/acs.jcim.3c01642. Epub 2023 Dec 11.
4
Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
5
Toward explainable AI (XAI) for mental health detection based on language behavior.
Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
6
A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.
7
Systematic literature review on the application of explainable artificial intelligence in palliative care studies.
Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.
8
Explanatory pragmatism: a context-sensitive framework for explainable medical AI.
Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
9
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
10
How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.

Cited by

1
Artificial Intelligence Paradigms for Next-Generation Metal-Organic Framework Research.
J Am Chem Soc. 2025 Jul 9;147(27):23367-23380. doi: 10.1021/jacs.5c08214. Epub 2025 Jun 24.
2
A review of large language models and autonomous agents in chemistry.
Chem Sci. 2024 Dec 9;16(6):2514-2572. doi: 10.1039/d4sc03921a. eCollection 2025 Feb 5.

References

1
Augmenting large language models with chemistry tools.
Nat Mach Intell. 2024;6(5):525-535. doi: 10.1038/s42256-024-00832-8. Epub 2024 May 8.
2
PMC-LLaMA: toward building open-source language models for medicine.
J Am Med Inform Assoc. 2024 Sep 1;31(9):1833-1843. doi: 10.1093/jamia/ocae045.
3
Evaluation of Open-Source Large Language Models for Metal-Organic Frameworks Research.
J Chem Inf Model. 2024 Jul 8;64(13):4958-4965. doi: 10.1021/acs.jcim.4c00065. Epub 2024 Mar 26.
4
Autonomous chemical research with large language models.
Nature. 2023 Dec;624(7992):570-578. doi: 10.1038/s41586-023-06792-0. Epub 2023 Dec 20.
5
ChatGPT Chemistry Assistant for Text Mining and the Prediction of MOF Synthesis.
J Am Chem Soc. 2023 Aug 16;145(32):18048-18062. doi: 10.1021/jacs.3c05819. Epub 2023 Aug 7.
6
Large language models encode clinical knowledge.
Nature. 2023 Aug;620(7972):172-180. doi: 10.1038/s41586-023-06291-2. Epub 2023 Jul 12.
7
The future of chemistry is language.
Nat Rev Chem. 2023 Jul;7(7):457-458. doi: 10.1038/s41570-023-00502-0.
8
Deep Neural Networks and Tabular Data: A Survey.
IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):7499-7519. doi: 10.1109/TNNLS.2022.3229161. Epub 2024 Jun 3.
9
A Perspective on Explanations of Molecular Prediction Models.
J Chem Theory Comput. 2023 Apr 25;19(8):2149-2160. doi: 10.1021/acs.jctc.2c01235. Epub 2023 Mar 27.
10
AlphaFold accelerates artificial intelligence powered drug discovery: efficient discovery of a novel CDK20 small molecule inhibitor.
Chem Sci. 2023 Jan 10;14(6):1443-1452. doi: 10.1039/d2sc05709c. eCollection 2023 Feb 8.