Department of Computer Science, San Francisco State University, San Francisco, California 94132, USA.
Department of Chemistry, Boston College, Chestnut Hill, Massachusetts 02467, USA.
J Chem Phys. 2022 Apr 7;156(13):134109. doi: 10.1063/5.0087165.
Recent work has demonstrated the promise of using machine-learned surrogates, in particular Gaussian process (GP) surrogates, to reduce the number of electronic structure calculations (ESCs) needed to perform surrogate-model-based (SMB) geometry optimization. In this paper, we study geometry meta-optimization with GP surrogates, in which an SMB optimizer additionally learns from its past "experience" performing geometry optimization. To validate this idea, we start with the simplest setting, where a geometry meta-optimizer learns from previous optimizations of the same molecule with different initial-guess geometries. We give empirical evidence that geometry meta-optimization with GP surrogates is effective and requires less tuning than SMB optimization with GP surrogates on the ANI-1 dataset of off-equilibrium initial structures of small organic molecules. Unlike SMB optimization, where a surrogate must be immediately useful for optimizing a given geometry, a surrogate in geometry meta-optimization has more flexibility: it can distribute its ESC savings across a set of geometries. Indeed, we find that GP surrogates that preserve rotational invariance provide increased marginal ESC savings across geometries. As a more stringent test, we also apply geometry meta-optimization to conformational search on a hand-constructed dataset of hydrocarbons and alcohols. We observe that while SMB optimization and geometry meta-optimization do save on ESCs, they also tend to miss higher-energy conformers compared to standard geometry optimization. We believe that further research into characterizing the divergence between GP surrogates and potential energy surfaces is critical not only for advancing geometry meta-optimization but also for exploring the potential of machine-learned surrogates in geometry optimization in general.
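To make the SMB idea concrete, the following is a minimal, hypothetical sketch of surrogate-model-based minimization on a 1D toy potential. It is not the paper's method (the paper optimizes molecular geometries and studies rotationally invariant kernels); the potential `esc_energy`, the RBF length scale, and the grid search are all illustrative assumptions. Each loop iteration spends exactly one "ESC" at the minimizer of the current GP posterior mean, so expensive evaluations are replaced by cheap surrogate queries.

```python
import numpy as np

def esc_energy(x):
    """Stand-in for an expensive electronic structure calculation:
    a hypothetical 1D double-well potential with tilted wells."""
    return (x**2 - 1.0)**2 + 0.1 * x

def rbf_kernel(a, b, length=0.5):
    """Squared-exponential (RBF) kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise=1e-8):
    """GP regression posterior mean at the query points (zero prior mean)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_query, x_train)
    alpha = np.linalg.solve(K, y_train)  # K^{-1} y
    return K_star @ alpha

# Seed the surrogate with three ESCs, then iterate:
# minimize the cheap surrogate on a grid, pay one ESC at its argmin, refit.
x_train = np.array([-2.0, 0.0, 2.0])
y_train = np.array([esc_energy(x) for x in x_train])
grid = np.linspace(-2.0, 2.0, 401)

for _ in range(5):  # each iteration costs exactly one new ESC
    mu = gp_posterior_mean(x_train, y_train, grid)
    x_next = grid[np.argmin(mu)]          # surrogate-proposed geometry
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, esc_energy(x_next))

print(f"ESCs used: {len(x_train)}, best energy found: {y_train.min():.3f}")
```

The loop only ever evaluates the true potential 8 times; all other queries hit the surrogate. Geometry meta-optimization, as described above, goes one step further by letting the surrogate carry information across repeated optimizations rather than starting each one from scratch.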