King Abdullah University of Science and Technology (KAUST), Saudi Arabia.
Neural Netw. 2024 Jul;175:106286. doi: 10.1016/j.neunet.2024.106286. Epub 2024 Apr 8.
Recently, Physics-Informed Neural Networks (PINNs) have gained significant attention for their versatile interpolation capabilities in solving partial differential equations (PDEs). Despite their potential, the training can be computationally demanding, especially for intricate functions like wavefields. This is primarily because the neural (learned) basis functions are biased toward low frequencies, as they are dominated by polynomial-like calculations that are not inherently wavefield-friendly. In response, we propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions by modeling them as linear combinations of Gabor basis functions that satisfy the wave equation. Specifically, for the Helmholtz equation, we augment the fully connected neural network model with an adaptable Gabor layer constituting the final hidden layer, employing a weighted summation of these Gabor neurons to compute the predictions (output). The weights/coefficients of the Gabor functions are predicted by the preceding hidden layers, which include nonlinear activation functions. To ensure the Gabor layer's utilization across the model space, we incorporate a smaller auxiliary network to forecast the center of each Gabor function based on the input coordinates. Realistic assessments showcase the efficacy of this novel implementation compared to the vanilla PINN, particularly in scenarios involving high frequencies and realistic models that are often challenging for PINNs.
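To make the described architecture more concrete, the following is a minimal PyTorch sketch of how the components in the abstract could fit together: a trunk network predicting the coefficients of the Gabor neurons, a small auxiliary network predicting each Gabor center from the input coordinates, and an output formed as the weighted sum of the Gabor atoms. The layer sizes, the particular Gabor parameterization (Gaussian envelope times a plane-wave carrier), and all names are illustrative assumptions, not the authors' released implementation.

# A minimal sketch (assumed architecture, not the paper's code): a PINN whose
# last hidden layer consists of Gabor neurons, with coefficients predicted by
# the earlier layers and Gabor centers predicted by a small auxiliary network.
import torch
import torch.nn as nn

class GaborLayerPINN(nn.Module):
    def __init__(self, n_gabor=64, hidden=128, omega=30.0):
        super().__init__()
        self.n_gabor = n_gabor
        self.omega = omega  # angular frequency of the Helmholtz problem (assumed fixed here)
        # Trunk network: maps (x, z) to per-neuron Gabor coefficients.
        self.trunk = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_gabor),
        )
        # Auxiliary network: predicts the center of each Gabor atom from the
        # input coordinates, so the atoms cover the model space.
        self.center_net = nn.Sequential(
            nn.Linear(2, 32), nn.Tanh(),
            nn.Linear(32, 2 * n_gabor),
        )
        # Learnable Gaussian widths and propagation directions per atom.
        self.log_sigma = nn.Parameter(torch.zeros(n_gabor))
        self.theta = nn.Parameter(2 * torch.pi * torch.rand(n_gabor))

    def forward(self, xz):
        # xz: (N, 2) spatial coordinates.
        coeff = self.trunk(xz)                                   # (N, n_gabor) coefficients
        centers = self.center_net(xz).view(-1, self.n_gabor, 2)  # (N, n_gabor, 2) centers
        d = xz.unsqueeze(1) - centers                            # offsets to each center
        sigma = self.log_sigma.exp()
        envelope = torch.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2))
        # Plane-wave carrier along direction theta; Gabor atom = envelope * carrier.
        # The wavenumber is taken as omega here; in practice it would depend on velocity.
        phase = self.omega * (d[..., 0] * torch.cos(self.theta)
                              + d[..., 1] * torch.sin(self.theta))
        gabor = envelope * torch.cos(phase)                      # (N, n_gabor) Gabor neurons
        # Output = weighted sum of the Gabor neurons.
        return (coeff * gabor).sum(-1, keepdim=True)

As in a vanilla PINN, the Helmholtz residual would then be formed by differentiating this output with respect to the input coordinates via automatic differentiation (torch.autograd.grad) and penalizing its misfit at collocation points.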