Farenga Nicola, Fresca Stefania, Brivio Simone, Manzoni Andrea
MOX, Department of Mathematics, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan, 20133, Italy.
Neural Netw. 2025 May;185:107146. doi: 10.1016/j.neunet.2025.107146. Epub 2025 Jan 17.
In this work, we present the novel mathematical framework of latent dynamics models (LDMs) for reduced order modeling of parameterized nonlinear time-dependent PDEs. Our framework casts the latter task as a nonlinear dimensionality reduction problem, while constraining the latent state to evolve according to an unknown dynamical system. A time-continuous setting is employed to derive error and stability estimates for the LDM approximation of the full order model (FOM) solution. We analyze the impact of using an explicit Runge-Kutta scheme in the time-discrete setting, resulting in the ΔLDM formulation, and further explore the learnable setting, ΔLDM_θ, where deep neural networks approximate the discrete LDM components, while providing a bounded approximation error with respect to the FOM. Moreover, we extend the concept of parameterized Neural ODEs - a possible way to build data-driven dynamical systems with varying input parameters - to a convolutional architecture, where the input-parameter information is injected by means of an affine modulation mechanism, and design a convolutional autoencoder neural network able to retain spatial coherence, thus enhancing interpretability at the latent level. Numerical experiments, including the Burgers' and the advection-diffusion-reaction equations, demonstrate the framework's ability to obtain a time-continuous approximation of the FOM solution, making it possible to query the LDM approximation at any given time instant while retaining a prescribed level of accuracy. Our findings highlight the remarkable potential of the proposed LDMs, which represent a mathematically rigorous framework for enhancing the accuracy and approximation capabilities of reduced order modeling for time-dependent parameterized PDEs.
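As a schematic illustration of the setting described above (the notation here is a simplified sketch, not the paper's exact formulation), an LDM couples an encoder-decoder pair with an initial value problem posed on the latent state:

$$
\begin{aligned}
\dot{z}(t;\boldsymbol{\mu}) &= f\big(z(t;\boldsymbol{\mu}), t; \boldsymbol{\mu}\big), \qquad t \in (0, T],\\
z(0;\boldsymbol{\mu}) &= \psi'\big(u_0(\boldsymbol{\mu})\big),\\
\tilde{u}(t;\boldsymbol{\mu}) &= \psi\big(z(t;\boldsymbol{\mu})\big) \approx u(t;\boldsymbol{\mu}),
\end{aligned}
$$

where $\psi' : \mathbb{R}^{N_h} \to \mathbb{R}^{n}$ and $\psi : \mathbb{R}^{n} \to \mathbb{R}^{N_h}$ denote the encoder and decoder with $n \ll N_h$, $\boldsymbol{\mu}$ collects the input parameters, and $f$ is the (unknown, later learned) latent vector field. Discretizing the latent ODE with an explicit Runge-Kutta scheme yields the time-discrete ΔLDM setting, and replacing $\psi'$, $\psi$, and $f$ with trained neural networks yields ΔLDM_θ.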
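The following minimal NumPy sketch illustrates the overall LDM pipeline under stated assumptions: random linear maps stand in for the trained convolutional autoencoder, a tiny MLP with a FiLM-style affine modulation by the parameters μ stands in for the parameterized Neural ODE, and the classical explicit RK4 method stands in for the generic explicit Runge-Kutta scheme of the ΔLDM setting. All weights and dimensions are hypothetical placeholders; in practice they would be learned from FOM snapshots.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: FOM dimension N_h, latent dimension n, parameter dimension p.
N_h, n, p = 64, 4, 2

# Linear encoder/decoder standing in for the trained convolutional autoencoder
# (random weights here; in the paper these are learned).
W_enc = rng.standard_normal((n, N_h)) / np.sqrt(N_h)
W_dec = rng.standard_normal((N_h, n)) / np.sqrt(n)

def encode(u):
    return W_enc @ u

def decode(z):
    return W_dec @ z

# Latent dynamics f(z, t; mu): a tiny MLP whose hidden features are modulated by
# an affine (FiLM-style) transformation conditioned on mu -- a stand-in for the
# parameterized Neural ODE with affine modulation.
h = 16
W1 = rng.standard_normal((h, n + 1)) / np.sqrt(n + 1)  # +1 input for time t
W2 = rng.standard_normal((n, h)) / np.sqrt(h)
W_gamma = rng.standard_normal((h, p)) / np.sqrt(p)     # scale conditioned on mu
W_beta = rng.standard_normal((h, p)) / np.sqrt(p)      # shift conditioned on mu

def latent_rhs(z, t, mu):
    x = np.concatenate([z, [t]])
    a = np.tanh(W1 @ x)
    a = (1.0 + W_gamma @ mu) * a + W_beta @ mu  # affine modulation by mu
    return W2 @ a

def rk4_step(z, t, dt, mu):
    # One step of classical explicit RK4 (the Delta-LDM analysis covers
    # general explicit Runge-Kutta schemes).
    k1 = latent_rhs(z, t, mu)
    k2 = latent_rhs(z + 0.5 * dt * k1, t + 0.5 * dt, mu)
    k3 = latent_rhs(z + 0.5 * dt * k2, t + 0.5 * dt, mu)
    k4 = latent_rhs(z + dt * k3, t + dt, mu)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ldm_rollout(u0, mu, t_grid):
    """Encode the initial FOM state, march the latent ODE, decode each step."""
    z = encode(u0)
    states = [decode(z)]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        z = rk4_step(z, t0, t1 - t0, mu)
        states.append(decode(z))
    return np.stack(states)

u0 = rng.standard_normal(N_h)       # initial FOM state
mu = np.array([0.5, -0.3])          # input parameters
t_grid = np.linspace(0.0, 1.0, 11)  # any time grid may be queried
traj = ldm_rollout(u0, mu, t_grid)
print(traj.shape)  # (11, 64)
```

Because the latent dynamics are defined for all t, the rollout can be evaluated on an arbitrary time grid, which mirrors the time-continuous queryability highlighted in the abstract.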