Chang Han-Chung, Tsai Ming-Hsuan, Li Yi-Pei
Department of Chemical Engineering, National Taiwan University, No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan.
Taiwan International Graduate Program on Sustainable Chemical Science and Technology (TIGP-SCST), No. 128, Section 2, Academia Road, Taipei 11529, Taiwan.
J Chem Inf Model. 2025 Feb 10;65(3):1367-1377. doi: 10.1021/acs.jcim.4c02319. Epub 2025 Jan 25.
Accurately predicting activation energies is crucial for understanding chemical reactions and modeling complex reaction systems. However, the high computational cost of quantum chemistry methods often limits the feasibility of large-scale studies, leading to a scarcity of high-quality activation energy data. In this work, we explore and compare three innovative approaches (transfer learning, delta learning, and feature engineering) to enhance the accuracy of activation energy predictions using graph neural networks, specifically focusing on methods that incorporate low-cost, low-level computational data. Using the Chemprop model, we systematically evaluated how these methods leverage data from semiempirical quantum mechanics (SQM) calculations to improve predictions. Delta learning, which adjusts low-level SQM activation energies to align with high-level CCSD(T)-F12a targets, emerged as the most effective method, achieving high accuracy with substantially reduced data requirements. Notably, delta learning trained with just 20-30% of the high-level data matched or exceeded the performance of the other methods trained on the full data sets, making it advantageous in data-scarce scenarios. However, its reliance on transition state searches imposes significant computational demands during model application. Transfer learning, which pretrains models on large low-level data sets, provided mixed results, particularly when there was a mismatch in the reaction distributions between the training and target data sets. Feature engineering, which involves adding computed molecular properties as input features, showed modest gains, most notably when computed thermodynamic properties were included as features. Our study highlights the trade-offs between accuracy and computational demand in selecting the best approach for enhancing activation energy predictions. These insights provide valuable guidelines for researchers aiming to apply machine learning in chemical reaction engineering, helping to balance accuracy with resource constraints.
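As a minimal sketch of the delta learning setup described above (the notation is ours and is introduced only for illustration: r denotes a reaction and f_theta the graph neural network correction), the model is trained on the difference between the high- and low-level barriers, and its prediction is added back to the cheap SQM barrier at inference time:

\Delta E_a(r) = E_a^{\mathrm{CCSD(T)\text{-}F12a}}(r) - E_a^{\mathrm{SQM}}(r) \quad \text{(training target)}
\hat{E}_a^{\mathrm{high}}(r) = E_a^{\mathrm{SQM}}(r) + f_\theta(r) \quad \text{(prediction)}

Because E_a^{SQM}(r) must be evaluated for every new reaction, applying the trained model still requires a semiempirical transition state search, which is the source of the computational demand during model application noted in the abstract.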