Training algorithm matters for the performance of neural network potential: A case study of Adam and the Kalman filter optimizers.

Affiliations

Department of Chemistry-Ångström Laboratory, Uppsala University, Lägerhyddsvägen 1, P.O. Box 538, 75121 Uppsala, Sweden.

Division of Scientific Computing, Department of Information Technology, SciLifeLab, Uppsala University, Lägerhyddsvägen 2, P.O. Box 337, 75105 Uppsala, Sweden.

Publication information

J Chem Phys. 2021 Nov 28;155(20):204108. doi: 10.1063/5.0070931.

Abstract

One hidden yet important issue for developing neural network potentials (NNPs) is the choice of training algorithm. In this article, we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the extended Kalman filter algorithm (EKF), using the Behler-Parrinello neural network and two publicly accessible datasets of liquid water [Morawietz et al., Proc. Natl. Acad. Sci. U. S. A. 113, 8368-8373 (2016) and Cheng et al., Proc. Natl. Acad. Sci. U. S. A. 116, 1110-1115 (2019)]. This is achieved by implementing EKF in TensorFlow. It is found that NNPs trained with EKF are more transferable and less sensitive to the value of the learning rate, as compared to Adam. In both cases, error metrics of the validation set do not always serve as a good indicator for the actual performance of NNPs. Instead, we show that their performance correlates well with a Fisher information based similarity measure.
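The abstract states only that EKF was implemented in TensorFlow; the sketch below shows what a minimal global EKF weight update for a scalar-output network can look like in TensorFlow. The toy architecture, the hyperparameters (initial covariance scale, measurement-noise variance r, forgetting factor lam), and the per-sample update loop are illustrative assumptions, not the authors' implementation.

```python
import tensorflow as tf

# Toy stand-in for a Behler-Parrinello-style network: a small dense model
# mapping a fixed-length descriptor vector to a scalar energy. Architecture
# and hyperparameters below are assumptions for illustration only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(1),
])

n_w = sum(int(tf.size(v)) for v in model.trainable_variables)
P = tf.Variable(tf.eye(n_w) * 100.0)  # weight covariance, P0 = p0 * I (assumed p0)
r = tf.constant([[1.0]])              # assumed measurement-noise variance
lam = 0.998                           # assumed exponential forgetting factor


def ekf_step(x, y):
    """One EKF update of all weights from a single (descriptor, energy) pair."""
    with tf.GradientTape() as tape:
        pred = model(x[None, :])  # shape (1, 1)
    grads = tape.gradient(pred, model.trainable_variables)
    # Linearization H: gradient of the scalar output w.r.t. every weight.
    H = tf.concat([tf.reshape(g, (-1, 1)) for g in grads], axis=0)  # (n_w, 1)

    # Kalman gain K = P H (H^T P H + r)^(-1); the bracketed term is 1x1 here.
    PH = tf.matmul(P, H)
    S = tf.matmul(H, PH, transpose_a=True) + r
    K = PH / S

    # Weight update w <- w + K (y - f(x; w)), scattered back per variable.
    dw = tf.reshape(K * (y - pred[0, 0]), (-1,))
    offset = 0
    for v in model.trainable_variables:
        size = int(tf.size(v))
        v.assign_add(tf.reshape(dw[offset:offset + size], v.shape))
        offset += size

    # Covariance update with forgetting: P <- (P - K H^T P) / lam.
    P.assign((P - tf.matmul(K, PH, transpose_b=True)) / lam)


# Usage on random toy data.
for _ in range(10):
    ekf_step(tf.random.normal([8]), tf.random.normal([]))
```

Because P is a dense n_w x n_w matrix, each update costs O(n_w^2) time and memory, which is one reason EKF training is practical mainly for the fairly small networks typical of NNPs. As for the closing point, the similarity measure itself is defined in the paper; as background, the empirical Fisher information matrix of a model with weights $\mathbf{w}$ over a dataset $\mathcal{D}$ is commonly estimated from per-sample loss gradients as

$$ F(\mathcal{D}) = \frac{1}{|\mathcal{D}|} \sum_{i \in \mathcal{D}} \nabla_{\mathbf{w}} \ell_i(\mathbf{w}) \, \nabla_{\mathbf{w}} \ell_i(\mathbf{w})^{\top}, $$

and comparing $F$ evaluated on the training set with $F$ evaluated on a target dataset gives one natural way to quantify their similarity. Whether this exact estimator matches the paper's construction is an assumption here.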

