Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA.
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA.
J Chem Phys. 2023 May 14;158(18). doi: 10.1063/5.0142127.
Atomistic machine learning focuses on the creation of models that obey fundamental symmetries of atomistic configurations, such as permutation, translation, and rotation invariances. In many of these schemes, translation and rotation invariance are achieved by building on scalar invariants, e.g., distances between atom pairs. There is growing interest in molecular representations that work internally with higher rank rotational tensors, e.g., vector displacements between atoms, and tensor products thereof. Here, we present a framework for extending the Hierarchically Interacting Particle Neural Network (HIP-NN) with Tensor Sensitivity information (HIP-NN-TS) from each local atomic environment. Crucially, the method employs a weight tying strategy that allows direct incorporation of many-body information while adding very few model parameters. We show that HIP-NN-TS is more accurate than HIP-NN, with a negligible increase in parameter count, across several datasets and network sizes. As the dataset becomes more complex, tensor sensitivities provide greater improvements to model accuracy. In particular, HIP-NN-TS achieves a record mean absolute error of 0.927 kcal/mol for conformational energy variation on the challenging COMP6 benchmark, which includes a broad set of organic molecules. We also compare the computational performance of HIP-NN-TS to HIP-NN and other models in the literature.
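The distinction the abstract draws, between scalar invariants such as interatomic distances and representations built from displacement vectors and their tensor products, can be illustrated with a minimal sketch. The code below is not the HIP-NN-TS implementation; it only demonstrates, with NumPy on a toy configuration, that pairwise distances are unchanged by a rigid rotation, that displacement vectors transform equivariantly rather than invariantly, and that scalar contractions of tensor products of displacements (here, a simple dot product) recover rotation invariance.

```python
import numpy as np

# Toy atomic configuration (5 atoms in 3D); not from any real dataset.
rng = np.random.default_rng(0)
positions = rng.normal(size=(5, 3))

# Build a random proper rotation matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1  # flip one axis so det(Q) = +1 (a rotation, not a reflection)

rotated = positions @ Q.T  # apply the rotation to every atom


def pairwise_distances(x):
    """All pairwise interatomic distances: a set of scalar invariants."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)


# 1) Scalar invariants: distances are unchanged by the rotation.
assert np.allclose(pairwise_distances(positions), pairwise_distances(rotated))

# 2) Displacement vectors are equivariant, not invariant: they co-rotate.
d = positions[1] - positions[0]
d_rot = rotated[1] - rotated[0]
assert np.allclose(d_rot, Q @ d)

# 3) Contracting tensor products of displacements yields scalars that are
#    again rotation-invariant, e.g. the dot product of two displacements.
e = positions[2] - positions[0]
e_rot = rotated[2] - rotated[0]
assert np.allclose(d @ e, d_rot @ e_rot)

print("invariance checks passed")
```

Architectures that carry such higher-rank quantities internally, as HIP-NN-TS does via its tensor sensitivities, can encode angular (many-body) information that pure distance-based features only capture indirectly.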