Department of Chemistry and Supercomputing Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431, United States.
J Phys Chem A. 2023 Jun 22;127(24):5287-5297. doi: 10.1021/acs.jpca.3c02627. Epub 2023 Jun 12.
Machine-learned representations of potential energy surfaces generated in the output layer of a feedforward neural network are becoming increasingly popular. One difficulty with neural network output is that it is often unreliable in regions where training data are missing or sparse. Human-designed potentials often build in proper extrapolation behavior by choice of functional form. Because machine learning is very efficient, it is desirable to learn how to add human intelligence to machine-learned potentials in a convenient way. One example is the well-understood feature of interaction potentials that they vanish when subsystems are too far separated to interact. In this article, we present a way to add a new kind of activation function to a neural network to enforce low-dimensional constraints. In particular, the activation function depends parametrically on all of the input variables. We illustrate the use of this approach by showing how it can force an interaction potential to go to zero at large subsystem separations without either inputting a specific functional form for the potential or adding data to the training set in the asymptotic region of geometries where the subsystems are separated. In the process of illustrating this, we present an improved set of potential energy surfaces for the 14 lowest ³A′ states of O₃. The method is more general than this example, and it may be used to add other low-dimensional knowledge or lower-level knowledge to machine-learned potentials. In addition to the O₃ example, we present a more general method called parametrically managed diabatization by deep neural network (PM-DDNN), which is an improvement on our previously presented permutationally restrained diabatization by deep neural network (PR-DDNN).
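To make the central idea concrete, the following is a minimal sketch (not the authors' implementation) of an output-layer activation that depends parametrically on the network inputs: the raw network output is multiplied by a switching factor computed from the subsystem separation, so the predicted interaction energy vanishes at large separation by construction rather than by training data. The shifted-sigmoid switching form, the parameters r0 and delta, the network sizes, and the assumption that the first input coordinate is the separation R are all illustrative choices.

```python
import torch
import torch.nn as nn


class ParametricallyManagedOutput(nn.Module):
    """Output activation whose effect depends parametrically on the inputs.

    The raw network output is scaled by a switching factor s(R) that tends
    to 1 at small subsystem separation R and to 0 at large R, enforcing the
    asymptotic constraint V_interaction -> 0. The shifted-sigmoid form and
    the values of r0 and delta are illustrative assumptions.
    """

    def __init__(self, r0: float = 6.0, delta: float = 1.0):
        super().__init__()
        self.r0 = r0        # separation at which damping reaches 50% (assumed)
        self.delta = delta  # width of the switching region (assumed)

    def forward(self, raw_energy: torch.Tensor, separation: torch.Tensor) -> torch.Tensor:
        # s(R) -> 1 for R << r0: the fitted potential is left unchanged.
        # s(R) -> 0 for R >> r0: the interaction potential is forced to zero.
        s = torch.sigmoid((self.r0 - separation) / self.delta)
        return raw_energy * s


class InteractionPES(nn.Module):
    """Feedforward fit of an interaction potential with the managed output."""

    def __init__(self, n_inputs: int, n_hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, 1),
        )
        self.managed = ParametricallyManagedOutput()

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # For this sketch the first coordinate is taken to be the subsystem
        # separation R; in general the activation could be parameterized by
        # any function of all input variables.
        separation = coords[:, 0:1]
        return self.managed(self.body(coords), separation)


# Quick check that the asymptotic behavior holds with no asymptotic training data:
model = InteractionPES(n_inputs=3)
near = torch.tensor([[2.0, 1.2, 1.2]])   # interacting geometry
far = torch.tensor([[50.0, 1.2, 1.2]])   # well-separated subsystems
print(model(near).item())  # finite, network-determined value
print(model(far).item())   # ~0 by construction
```

Because the constraint is built into the activation rather than the loss, the network never needs training examples in the separated-subsystem region, which is the point the abstract emphasizes.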