Timothy J. Giese, Jinzhe Zeng, Darrin M. York
Laboratory for Biomolecular Simulation Research, Institute for Quantitative Biomedicine, and Department of Chemistry and Chemical Biology, Rutgers University, Piscataway, New Jersey 08854, United States.
School of Artificial Intelligence and Data Science, University of Science and Technology of China, Hefei 230026, China.
J Phys Chem B. 2025 Jun 5;129(22):5477-5490. doi: 10.1021/acs.jpcb.5c02006. Epub 2025 May 26.
We previously introduced a "range corrected" Δ-machine learning potential (ΔMLP) that used deep neural networks to improve the accuracy of combined quantum mechanical/molecular mechanical (QM/MM) simulations by correcting both the internal QM and QM/MM interaction energies and forces [J. Chem. Theory Comput. 2021, 17, 6993-7009]. The present work extends this approach to graph neural networks. Specifically, the approach is applied to the MACE message passing neural network architecture, and a series of AM1/d + MACE models are trained to reproduce PBE0/6-31G* QM/MM energies and forces of model phosphoryl transesterification reactions. Several models are designed to test the transferability of AM1/d + MACE by varying the amount of training data and calculating free energy surfaces of reactions that were not included in the parameter refinement. The transferability is compared to that of AM1/d + DP models that use the DeepPot-SE (DP) deep neural network architecture. The AM1/d + MACE models are found to reproduce the target free energy surfaces even in instances where the AM1/d + DP models exhibit inaccuracies. We train "end-state" models that include data only from the reactant and product states of the six reactions. Unlike the uncorrected AM1/d profiles, the AM1/d + MACE method correctly reproduces a stable pentacoordinated phosphorus intermediate even though the training did not include structures with a similar bonding pattern. Furthermore, the hyperparameters defining the MACE message passing mechanism are varied to explore their effect on the model's accuracy and performance. The AM1/d + MACE simulations are 28% slower than AM1/d QM/MM when the ΔMLP correction is performed on a graphics processing unit. Our results suggest that the MACE architecture may lead to ΔMLP models with improved transferability.
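The "range corrected" ΔMLP scheme described in the abstract can be summarized by the standard additive correction form below. The notation is illustrative (it is not copied from the paper); the key point, stated in the abstract, is that the machine-learned term corrects both the internal QM energy and the QM/MM interaction energy of the low-level semiempirical Hamiltonian toward the target ab initio level:

```latex
% Low-level QM/MM energy (AM1/d) plus a Delta-ML correction trained on
% the difference to the target level (PBE0/6-31G*). The correction
% depends on both QM and nearby MM coordinates, so it can adjust the
% internal QM energy and the QM/MM interaction energy.
E_{\mathrm{total}} =
    E^{\mathrm{AM1/d}}_{\mathrm{QM}}
  + E^{\mathrm{AM1/d}}_{\mathrm{QM/MM}}
  + \Delta E^{\mathrm{ML}}\!\left(\mathbf{R}_{\mathrm{QM}},\,\mathbf{R}_{\mathrm{MM}}\right),
\qquad
\Delta E^{\mathrm{ML}} \approx
    E^{\mathrm{PBE0/6\text{-}31G^{*}}} - E^{\mathrm{AM1/d}} .
```

The corrected forces follow by differentiating the same expression with respect to the atomic coordinates; in this work the network evaluating $\Delta E^{\mathrm{ML}}$ is either MACE or DeepPot-SE.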