Operator compression with deep neural networks.

Authors

Kröpfl Fabian, Maier Roland, Peterseim Daniel

Affiliations

Institute of Mathematics, University of Augsburg, Universitätsstr. 12a, 86159 Augsburg, Germany.

Institute of Mathematics, Friedrich Schiller University Jena, Ernst-Abbe-Platz 2, 07743 Jena, Germany.

Publication information

Adv Contin Discret Model. 2022;2022(1):29. doi: 10.1186/s13662-022-03702-y. Epub 2022 Apr 9.

Abstract

This paper studies the compression of partial differential operators using neural networks. We consider a family of operators, parameterized by a potentially high-dimensional space of coefficients that may vary on a large range of scales. Based on the existing methods that compress such a multiscale operator to a finite-dimensional sparse surrogate model on a given target scale, we propose to directly approximate the coefficient-to-surrogate map with a neural network. We emulate local assembly structures of the surrogates and thus only require a moderately sized network that can be trained efficiently in an offline phase. This enables large compression ratios and the online computation of a surrogate based on simple forward passes through the network is substantially accelerated compared to classical numerical upscaling approaches. We apply the abstract framework to a family of prototypical second-order elliptic heterogeneous diffusion operators as a demonstrating example.
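The abstract's central idea, learning the coefficient-to-surrogate map while emulating the local assembly structure of the compressed operator, can be sketched in a toy form. The script below is an illustrative assumption, not the paper's actual architecture: an untrained two-layer MLP maps each local patch of a 1D diffusion coefficient to a 2x2 local surrogate block, and the blocks are scattered into a global matrix, so the surrogate inherits the tridiagonal sparsity of a coarse finite element stiffness matrix. All names and sizes here are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class PatchToBlockNet:
    """Tiny two-layer MLP: coefficient patch (patch_size values) -> 2x2 local block.

    Stands in for the coefficient-to-surrogate map; in practice the network
    would be trained offline on precomputed surrogates (weights here are random).
    """
    def __init__(self, patch_size, hidden=16):
        self.W1 = rng.standard_normal((hidden, patch_size)) / np.sqrt(patch_size)
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((4, hidden)) / np.sqrt(hidden)
        self.b2 = np.zeros(4)

    def __call__(self, patch):
        h = np.tanh(self.W1 @ patch + self.b1)
        return (self.W2 @ h + self.b2).reshape(2, 2)

def assemble_surrogate(coeff, n_coarse, net):
    """Assemble the coarse surrogate matrix from per-element network outputs.

    Each coarse element sees only its local slice of the fine-scale
    coefficient, mirroring the local assembly structure of FEM.
    """
    patch_size = len(coeff) // n_coarse
    K = np.zeros((n_coarse + 1, n_coarse + 1))
    for e in range(n_coarse):
        patch = coeff[e * patch_size:(e + 1) * patch_size]
        K[e:e + 2, e:e + 2] += net(patch)   # local-to-global scatter
    return K

# Fine-scale coefficient: 16 coarse elements, 8 fine values per element.
coeff = 1.0 + rng.random(128)
net = PatchToBlockNet(patch_size=8)
K = assemble_surrogate(coeff, n_coarse=16, net=net)
print(K.shape)                            # (17, 17)
print(np.count_nonzero(np.triu(K, 2)))   # 0: tridiagonal sparsity preserved
```

The online cost is one cheap forward pass per coarse element, which is what makes the surrogate computation fast compared with classical numerical upscaling, where each local problem must be solved on the fine scale.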


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2ff6/9028012/9da581965d13/13662_2022_3702_Figa_HTML.jpg

Similar articles

1. Operator compression with deep neural networks.
   Adv Contin Discret Model. 2022;2022(1):29. doi: 10.1186/s13662-022-03702-y. Epub 2022 Apr 9.
2. Basis operator network: A neural network-based model for learning nonlinear operators via neural basis.
   Neural Netw. 2023 Jul;164:21-37. doi: 10.1016/j.neunet.2023.04.017. Epub 2023 Apr 20.
3. A General Decoupled Learning Framework for Parameterized Image Operators.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):33-47. doi: 10.1109/TPAMI.2019.2925793. Epub 2020 Dec 4.
4. Deep Learning Approaches to Surrogates for Solving the Diffusion Equation for Mechanistic Real-World Simulations.
   Front Physiol. 2021 Jun 24;12:667828. doi: 10.3389/fphys.2021.667828. eCollection 2021.
5. Reduced-order modelling numerical homogenization.
   Philos Trans A Math Phys Eng Sci. 2014 Aug 6;372(2021). doi: 10.1098/rsta.2013.0388.
8. In-context operator learning with data prompts for differential equation problems.
   Proc Natl Acad Sci U S A. 2023 Sep 26;120(39):e2310142120. doi: 10.1073/pnas.2310142120. Epub 2023 Sep 19.
9. On the approximation of bi-Lipschitz maps by invertible neural networks.
   Neural Netw. 2024 Jun;174:106214. doi: 10.1016/j.neunet.2024.106214. Epub 2024 Feb 24.
10. Sound propagation in realistic interactive 3D scenes with parameterized sources using deep neural operators.
   Proc Natl Acad Sci U S A. 2024 Jan 9;121(2):e2312159120. doi: 10.1073/pnas.2312159120. Epub 2024 Jan 4.

References cited in this article

1. Solving high-dimensional partial differential equations using deep learning.
   Proc Natl Acad Sci U S A. 2018 Aug 21;115(34):8505-8510. doi: 10.1073/pnas.1718942115. Epub 2018 Aug 6.
