School of Mathematical Sciences, Fudan University, Shanghai 200433, China.
School of Mathematical Sciences, Fudan University, Shanghai 200433, China; Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200438, China; Shanghai Key Laboratory for Contemporary Applied Mathematics, Shanghai 200433, China.
Neural Netw. 2023 Jul;164:21-37. doi: 10.1016/j.neunet.2023.04.017. Epub 2023 Apr 20.
It is widely acknowledged that neural networks can approximate any continuous (even measurable) function between finite-dimensional Euclidean spaces to arbitrary accuracy. Recently, neural networks have begun to be applied in infinite-dimensional settings as well. Universal approximation theorems for operators guarantee that neural networks can learn mappings between infinite-dimensional spaces. In this paper, we propose a neural network-based method (BasisONet) capable of approximating mappings between function spaces. To reduce the dimension of an infinite-dimensional space, we propose a novel function autoencoder that compresses function data. Once trained, our model can predict the output function at any resolution from input data sampled at any resolution. Numerical experiments demonstrate that our model is competitive with existing methods on standard benchmarks and can handle data on complex geometries with high precision. We further analyze some notable characteristics of our model based on the numerical results.
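The core idea the abstract describes — compressing function data to coefficients on a learned basis, then decoding at any resolution — can be illustrated with a deliberately simplified sketch. The code below is **not** the paper's BasisONet architecture; it uses a linear analogue (SVD/POD) in place of a trained neural autoencoder, purely to show how sampled functions reduce to a few basis coefficients. All names (`encode`, `decode`, the synthetic data) are hypothetical.

```python
import numpy as np

# Hypothetical illustration (NOT the paper's BasisONet): a linear
# "function autoencoder" via SVD/POD. Each training function is sampled
# on a grid; the top right singular vectors act as learned basis
# functions, and a function is compressed to k coefficients.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                   # sampling grid

# synthetic function data: random smooth combinations of 5 sine modes
freqs = np.arange(1, 6)
coefs = rng.normal(size=(500, len(freqs)))
F = coefs @ np.sin(np.outer(freqs, np.pi * x))   # shape (500 functions, 200 points)

# learn a basis from the data: top-k right singular vectors of F
U, s, Vt = np.linalg.svd(F, full_matrices=False)
k = 5
basis = Vt[:k]                                   # (k, 200): k basis functions on the grid

def encode(f):
    """Compress a sampled function to k coefficients on the learned basis."""
    return basis @ f

def decode(z):
    """Reconstruct the sampled function from its k-dimensional code."""
    return z @ basis

f = F[0]
z = encode(f)                                    # 5 numbers instead of 200 samples
f_rec = decode(z)
rel_err = np.linalg.norm(f - f_rec) / np.linalg.norm(f)
print(k, rel_err)
```

Because the synthetic data lies exactly in a 5-dimensional subspace, 5 coefficients suffice for near-perfect reconstruction; a neural function autoencoder, as proposed in the paper, plays the analogous role for nonlinear structure, and evaluating learned basis functions at new points is what enables resolution-independent prediction.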