Chen Chuqi, Yang Yahong, Xiang Yang, Hao Wenrui
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong.
Department of Mathematics, The Pennsylvania State University, Pennsylvania, USA.
J Sci Comput. 2025 Aug;104(2). doi: 10.1007/s10915-025-02965-3. Epub 2025 Jun 24.
Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering, especially in scenarios featuring complex domains or the incorporation of empirical data. One advantage of neural network methods for PDEs lies in automatic differentiation (AD), which requires only the sample points themselves, unlike traditional finite difference (FD) approximations, which need nearby local points to compute derivatives. In this paper, we quantitatively demonstrate the advantage of AD in training neural networks. The concept of truncated entropy is introduced to characterize the training behavior. Specifically, through comprehensive experimental and theoretical analyses conducted on random feature models and two-layer neural networks, we find that the defined truncated entropy serves as a reliable metric for quantifying the residual loss of random feature models and the training speed of neural networks for both the AD and FD methods. Our experimental and theoretical analyses demonstrate that, from a training perspective, AD outperforms FD in solving PDEs.
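To illustrate the AD-versus-FD distinction the abstract draws, here is a minimal sketch, not taken from the paper: a toy forward-mode AD via dual numbers computes the exact derivative of a trial function at a single sample point, whereas a central finite difference must evaluate the function at two nearby points x0 ± h and incurs an O(h²) truncation error. The function `u` and all names below are illustrative assumptions.

```python
import math

class Dual:
    """Minimal forward-mode AD via dual numbers (illustration only)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule carried alongside the value
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    # sin with its derivative rule, so it works on Dual or float
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)
    return math.sin(x)

def u(x):
    # Toy trial solution u(x) = x * sin(x) (stand-in for a network output)
    return x * sin(x)

x0 = 1.0
# AD: exact derivative from the single sample point x0
ad_deriv = u(Dual(x0, 1.0)).der
# FD: central difference needs two extra nearby points x0 ± h
h = 1e-5
fd_deriv = (u(x0 + h) - u(x0 - h)) / (2 * h)
# Analytic derivative for comparison: sin(x) + x*cos(x)
true_deriv = math.sin(x0) + x0 * math.cos(x0)
```

The AD result matches the analytic derivative to machine precision using only `x0`, while the FD estimate depends on the stencil width `h` and additional evaluation points, which is the structural difference the paper's truncated-entropy analysis quantifies.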