Xu Wenbin, Sanspeur Rohan Yuri, Kolluru Adeesh, Deng Bowen, Harrington Peter, Farrell Steven, Reuter Karsten, Kitchin John R
National Energy Research Scientific Computing Center, Berkeley, CA 94720.
Lawrence Berkeley National Laboratory, Berkeley, CA 94720.
Proc Natl Acad Sci U S A. 2025 Jul 8;122(27):e2422973122. doi: 10.1073/pnas.2422973122. Epub 2025 Jul 1.
The screening and discovery of magnetic materials are hindered by the computational cost of the first-principles density-functional theory (DFT) calculations required to find the ground-state magnetic ordering. Although universal machine-learning interatomic potentials (uMLIPs), also known as atomistic foundation models, offer high-fidelity models of many atomistic systems with significant speedup, they currently lack the inputs required for predicting magnetic ordering. In this work, we present a data-efficient, spin-informed graph neural network framework that incorporates spin degrees of freedom as inputs and preserves physical symmetries, extending the functionality of uMLIPs to simulate magnetic orderings. This framework speeds up DFT calculations through better initial guesses for magnetic moments, determines the ground-state ordering of bulk materials, and even generalizes to magnetic ordering in surfaces. Furthermore, we implement a closed-loop anomaly detection approach that effectively addresses the classic "chicken-and-egg" problem of creating a high-quality dataset while developing a uMLIP, unearthing anomalies in large benchmark datasets and boosting model accuracy.
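The abstract states only that spin degrees of freedom enter the network as symmetry-respecting inputs; it does not describe the encoding. The following is a minimal illustrative sketch of one possible encoding for collinear moments, in which node and edge features stay invariant under a global spin flip while still distinguishing ferromagnetic from antiferromagnetic orderings. The function names, feature choices, and use of NumPy are assumptions for illustration, not the authors' implementation.

import numpy as np

# Hypothetical sketch, not the paper's architecture: encode collinear spin
# degrees of freedom as graph features that are invariant under a global
# spin flip yet still distinguish ferromagnetic from antiferromagnetic pairs.

def spin_node_features(atomic_numbers, moments):
    """Per-atom features: element identity plus |m_i| (flip-invariant)."""
    z = np.asarray(atomic_numbers, dtype=float).reshape(-1, 1)
    m = np.asarray(moments, dtype=float).reshape(-1, 1)
    return np.hstack([z, np.abs(m)])

def spin_edge_features(moments, edges):
    """Per-edge feature m_i * m_j: positive for aligned pairs, negative for
    anti-aligned pairs, unchanged if every spin in the cell is flipped."""
    m = np.asarray(moments, dtype=float)
    return np.array([m[i] * m[j] for i, j in edges]).reshape(-1, 1)

# Minimal antiferromagnetic motif: two Fe atoms with opposite moments.
Z = [26, 26]
m = [2.0, -2.0]
edges = [(0, 1), (1, 0)]
print(spin_node_features(Z, m))
print(spin_edge_features(m, edges))                # negative -> anti-aligned pair
print(spin_edge_features([-x for x in m], edges))  # identical under a global flip

Features of this kind would let a model compare candidate magnetic orderings without breaking the spin-flip symmetry of the energy, which is one plausible route to the "better initial guesses for magnetic moments" mentioned above.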