
Mixed Precision Fermi-Operator Expansion on Tensor Cores from a Machine Learning Perspective.

Authors

Finkelstein Joshua, Smith Justin S, Mniszewski Susan M, Barros Kipton, Negre Christian F A, Rubensson Emanuel H, Niklasson Anders M N

Affiliations

Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States.

Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States.

Publication

J Chem Theory Comput. 2021 Apr 13;17(4):2256-2265. doi: 10.1021/acs.jctc.1c00057. Epub 2021 Apr 2.

Abstract

We present a second-order recursive Fermi-operator expansion scheme using mixed precision floating point operations to perform electronic structure calculations using tensor core units. A performance of over 100 teraFLOPs is achieved for half-precision floating point operations on Nvidia's A100 tensor core units. The second-order recursive Fermi-operator scheme is formulated in terms of a generalized, differentiable deep neural network structure, which solves the quantum mechanical electronic structure problem. We demonstrate how this network can be accelerated by optimizing the weight and bias values to substantially reduce the number of layers required for convergence. We also show how this machine learning approach can be used to optimize the coefficients of the recursive Fermi-operator expansion to accurately represent the fractional occupation numbers of the electronic states at finite temperatures.
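The recursion underlying this class of methods can be illustrated with the standard second-order spectral projection (SP2) scheme, which repeatedly applies the polynomials X² or 2X − X² to drive the eigenvalues of a normalized Hamiltonian toward 0 (unoccupied) or 1 (occupied). The sketch below is a minimal double-precision illustration of that recursion only; the paper's mixed-precision, tensor-core formulation, and its learned weights and biases, are not reproduced here. The function name and loop count are illustrative choices, not the authors' API.

```python
import numpy as np

def sp2_density_matrix(H, n_occ, n_layers=30):
    """Second-order recursive Fermi-operator (SP2) sketch: purify a
    symmetric Hamiltonian H into an idempotent density matrix D with
    trace(D) = n_occ. Illustrative double-precision version only."""
    n = H.shape[0]
    # Map the spectrum of H linearly into [0, 1], reversed so that
    # low-energy (occupied) states approach eigenvalue 1.
    eigs = np.linalg.eigvalsh(H)
    e_min, e_max = eigs[0], eigs[-1]
    X = (e_max * np.eye(n) - H) / (e_max - e_min)
    for _ in range(n_layers):
        X2 = X @ X  # the dominant cost: one dense matrix-matrix multiply
        # Pick the branch (X^2 or 2X - X^2) whose trace is closer to the
        # target occupation number; both push eigenvalues toward 0 or 1.
        if abs(np.trace(X2) - n_occ) <= abs(2 * np.trace(X) - np.trace(X2) - n_occ):
            X = X2
        else:
            X = 2 * X - X2
    return X
```

Each layer of this recursion is one generalized matrix multiplication, which is why the scheme maps naturally onto tensor-core hardware and, as the abstract notes, onto a deep-neural-network structure whose per-layer coefficients can be optimized.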

