Representations in neural network based empirical potentials.

Affiliations

Department of Physics and School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA.

Publication Information

J Chem Phys. 2017 Jul 14;147(2):024104. doi: 10.1063/1.4990503.

Abstract

Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.
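The analysis the abstract describes — passing per-atom descriptors through a trained network, extracting an intermediate-layer representation, and applying dimensionality reduction to visualize it — can be sketched as follows. The descriptor dimension, layer sizes, random weights, and the choice of PCA via SVD are illustrative assumptions for this sketch, not the paper's actual architecture or method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 "atoms", each described by a
# 10-dimensional symmetry-function-style descriptor. The paper studies
# silicon environments; here the descriptors are synthetic.
X = rng.normal(size=(200, 10))

# A tiny two-layer network mapping descriptors to a per-atom energy.
# The weights are random here; in practice they would come from
# training against density functional theory energies.
W1 = rng.normal(size=(10, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

hidden = np.tanh(X @ W1 + b1)   # intermediate representation of each atom
energy = hidden @ W2 + b2       # per-atom energy prediction

# Dimensionality reduction (PCA via SVD) on the hidden activations:
# project each atom's learned representation onto its top two
# principal axes, giving a 2D embedding one can plot and inspect.
centered = hidden - hidden.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords2d = centered @ Vt[:2].T  # shape (200, 2)

print(coords2d.shape)
```

Plotting `coords2d` colored by, say, predicted energy or local coordination is the kind of inspection that reveals how atoms with similar environments cluster in the network's internal representation.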

