Fabian Kröpfl, Roland Maier, Daniel Peterseim
Institute of Mathematics, University of Augsburg, Universitätsstr. 12a, 86159 Augsburg, Germany.
Institute of Mathematics, Friedrich Schiller University Jena, Ernst-Abbe-Platz 2, 07743 Jena, Germany.
Adv Contin Discret Model. 2022;2022(1):29. doi: 10.1186/s13662-022-03702-y. Epub 2022 Apr 9.
This paper studies the compression of partial differential operators using neural networks. We consider a family of operators, parameterized by a potentially high-dimensional space of coefficients that may vary across a large range of scales. Building on existing methods that compress such a multiscale operator to a finite-dimensional sparse surrogate model on a given target scale, we propose to directly approximate the coefficient-to-surrogate map with a neural network. We emulate the local assembly structure of the surrogates and thus require only a moderately sized network that can be trained efficiently in an offline phase. This enables large compression ratios, and the online computation of a surrogate via simple forward passes through the network is substantially faster than classical numerical upscaling approaches. We apply the abstract framework to a family of prototypical second-order elliptic heterogeneous diffusion operators as an illustrative example.
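To make the coefficient-to-surrogate idea concrete, the following is a minimal sketch (not the authors' implementation) of a small network that maps a local patch of diffusion-coefficient values to the entries of a local surrogate matrix block, trained offline against precomputed upscaled blocks. All names, layer widths, patch and block sizes here are illustrative assumptions; the paper's actual architecture, data layout, and training setup may differ.

```python
# Sketch: neural approximation of a coefficient-to-surrogate map.
# Assumed sizes and the training data below are placeholders, not the paper's.
import torch
import torch.nn as nn

PATCH_SIZE = 16     # assumed: fine-scale coefficient values per local patch
SURROGATE_SIZE = 4  # assumed: target-scale DOFs per local surrogate block

class CoefficientToSurrogate(nn.Module):
    """Maps a local coefficient patch to a local surrogate matrix block."""
    def __init__(self, patch_size: int, surrogate_size: int, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(patch_size, width),
            nn.ReLU(),
            nn.Linear(width, width),
            nn.ReLU(),
            nn.Linear(width, surrogate_size * surrogate_size),
        )
        self.surrogate_size = surrogate_size

    def forward(self, coeff_patch: torch.Tensor) -> torch.Tensor:
        # coeff_patch: (batch, patch_size) -> (batch, s, s) surrogate blocks
        out = self.net(coeff_patch)
        return out.view(-1, self.surrogate_size, self.surrogate_size)

# Offline phase (assumed setup): fit the network to surrogate blocks that a
# classical numerical upscaling method would produce for sampled coefficients.
model = CoefficientToSurrogate(PATCH_SIZE, SURROGATE_SIZE)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy data standing in for (coefficient patch, upscaled block) pairs.
patches = torch.rand(256, PATCH_SIZE)
targets = torch.rand(256, SURROGATE_SIZE, SURROGATE_SIZE)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(patches), targets)
    loss.backward()
    optimizer.step()

# Online phase: one forward pass per patch yields a local surrogate block,
# to be assembled into the global sparse surrogate (assembly omitted here).
new_patch = torch.rand(1, PATCH_SIZE)
local_block = model(new_patch)
```

Because the same small network is reused for every local patch, mirroring the local assembly structure of the surrogate, the model stays moderately sized regardless of the global problem dimension; this is what permits the large compression ratios and fast online evaluation described above.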