Department of Chemistry, University of Rochester, Rochester, New York 14627, United States.
Department of Chemical Engineering, University of Rochester, Rochester, New York 14627, United States.
J Chem Theory Comput. 2023 Apr 25;19(8):2149-2160. doi: 10.1021/acs.jctc.2c01235. Epub 2023 Mar 27.
Chemists can be skeptical about using deep learning (DL) in decision making because of the lack of interpretability in "black-box" models. Explainable artificial intelligence (XAI) is a branch of artificial intelligence (AI) that addresses this drawback by providing tools to interpret DL models and their predictions. We review the principles of XAI in the domain of chemistry and emerging methods for creating and evaluating explanations. We then focus on methods developed by our group and their applications in predicting the solubility, blood-brain barrier permeability, and scent of molecules. We show that XAI methods such as chemical counterfactuals and descriptor explanations can both explain DL predictions and give insight into structure-property relationships. Finally, we discuss how the two-step process of developing a black-box model and then explaining its predictions can uncover structure-property relationships.
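To make the two techniques named in the abstract concrete, the sketch below pairs a counterfactual search with a LIME-style descriptor explanation. It uses exmol, the open-source explanation package from the authors' lab (github.com/ur-whitelab/exmol), but the rest is an illustrative assumption: the logP-threshold "model" stands in for a trained DL predictor, and the Lasso surrogate over MACCS keys is one generic way to produce descriptor attributions, not necessarily the paper's exact workflow.

```python
# Minimal sketch (not the paper's code) of counterfactual and descriptor
# explanations for a molecular property predictor, via exmol + RDKit.
import exmol
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors, MACCSkeys
from sklearn.linear_model import Lasso

def model(smiles, selfies):
    # Stand-in "black-box" classifier: label a molecule soluble (1)
    # if its computed logP is below 2. A trained DL model that maps
    # SMILES to a prediction would be used in practice.
    return [int(Descriptors.MolLogP(Chem.MolFromSmiles(s)) < 2) for s in smiles]

# 1. Counterfactuals: sample a local chemical space around the molecule,
#    then keep the most similar molecules whose predicted label flips.
space = exmol.sample_space("CCCCCCCCO", model, batched=True)
cfs = exmol.cf_explain(space, nmols=3)
for cf in cfs:
    print(cf.smiles, round(cf.similarity, 2), cf.yhat)

# 2. Descriptor explanation (LIME-style): fit a sparse linear surrogate on
#    MACCS substructure keys over the same sampled space, weighting samples
#    by similarity to the query molecule; the surviving coefficients point
#    to the substructures that drive the prediction locally.
X = np.array([list(MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(e.smiles)))
              for e in space])
y = np.array([e.yhat for e in space], dtype=float)
w = np.array([e.similarity for e in space])
surrogate = Lasso(alpha=0.01).fit(X, y, sample_weight=w)
top = np.argsort(-np.abs(surrogate.coef_))[:5]
print("most influential MACCS keys:", top, surrogate.coef_[top])
```

Both explanations reuse the same sampled chemical space, which is the appeal of the two-step recipe the abstract describes: train any black-box first, then probe it after the fact with model-agnostic queries.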