

Benchmarking structural evolution methods for training of machine learned interatomic potentials.

Author Information

Department of Materials Science and Engineering, Northwestern University, Evanston, IL 60208, United States of America.

Publication Information

J Phys Condens Matter. 2022 Jul 22;34(38). doi: 10.1088/1361-648X/ac7f73.

Abstract

When creating training data for machine-learned interatomic potentials (MLIPs), it is common to create initial structures and evolve them using molecular dynamics (MD) to sample a larger configuration space. We benchmark two other modalities of evolving structures, contour exploration (CE) and dimer-method (DM) searches, against MD for their ability to produce diverse and robust density functional theory training data sets for MLIPs. We also discuss in detail the generation of initial structures, either from known structures or from random structures, to help formalize structure-sourcing processes in the future. The polymorph-rich zirconium-oxygen composition space is used as a rigorous benchmark system for comparing the performance of MLIPs trained on structures generated by these structural evolution methods. Using Behler-Parrinello neural networks as our MLIP models, we find that CE and DM searches are generally superior to MD in terms of spatial descriptor diversity and statistical accuracy.
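The MD modality the abstract describes, evolving an initial structure along a trajectory and periodically collecting snapshots as training configurations, can be illustrated with a minimal sketch. This is not the authors' workflow: a Lennard-Jones pair potential (in reduced units) stands in for the DFT calculator, the three-atom starting geometry is arbitrary, and all function names (`lj_force`, `md_sample`) are illustrative.

```python
def lj_force(pos):
    """Pairwise Lennard-Jones forces and potential energy (reduced units)."""
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            r2 = sum(x * x for x in d)
            inv6 = 1.0 / r2 ** 3
            energy += 4.0 * inv6 * (inv6 - 1.0)
            f = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2  # scalar force / r
            for k in range(3):
                forces[i][k] += f * d[k]
                forces[j][k] -= f * d[k]
    return forces, energy

def md_sample(pos, steps=200, dt=0.005, every=50):
    """Velocity-Verlet MD; collect (positions, energy) snapshots
    along the trajectory as candidate training configurations."""
    vel = [[0.0] * 3 for _ in pos]
    forces, _ = lj_force(pos)
    snapshots = []
    for step in range(1, steps + 1):
        for i in range(len(pos)):
            for k in range(3):
                pos[i][k] += vel[i][k] * dt + 0.5 * forces[i][k] * dt * dt
        new_forces, energy = lj_force(pos)
        for i in range(len(pos)):
            for k in range(3):
                vel[i][k] += 0.5 * (forces[i][k] + new_forces[i][k]) * dt
        forces = new_forces
        if step % every == 0:
            snapshots.append(([p[:] for p in pos], energy))
    return snapshots

# Start from a slightly perturbed triangle (a stand-in "known structure").
start = [[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.55, 1.0, 0.05]]
samples = md_sample(start)
print(len(samples))  # one snapshot every 50 of 200 steps -> 4
```

In a real MLIP workflow each collected snapshot would be re-evaluated with DFT to label it, and the CE and DM modalities the paper benchmarks would replace the velocity-Verlet loop with their own structure-evolution rules while keeping the same sample-then-label pattern.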

