Department of Informatics, School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany.
Faculty of Computational Mathematics and Cybernetics, Moscow State University, 119991 Moscow, Russia.
Chaos. 2023 Feb;33(2):023121. doi: 10.1063/5.0113632.
We identify effective stochastic differential equations (SDEs) for coarse observables of fine-grained particle- or agent-based simulations; these SDEs then provide useful coarse surrogate models of the fine-scale dynamics. We approximate the drift and diffusivity functions in these effective SDEs through neural networks, which can be thought of as effective stochastic ResNets. The loss function is inspired by, and embodies, the structure of established stochastic numerical integrators (here, Euler-Maruyama and Milstein); our approximations can thus benefit from backward error analysis of these underlying numerical schemes. They also lend themselves naturally to "physics-informed" gray-box identification when approximate coarse models, such as mean field equations, are available. Existing numerical integration schemes for Langevin-type equations and for stochastic partial differential equations can also be used for training; we demonstrate this on a stochastically forced oscillator and the stochastic wave equation. Our approach does not require long trajectories, works on scattered snapshot data, and is designed to naturally handle different time steps per snapshot. We consider both the case where the coarse collective observables are known in advance and the case where they must be found in a data-driven manner.
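To illustrate the idea of an integrator-inspired loss, the following is a minimal sketch (not the authors' code) of a negative log-likelihood based on the Euler-Maruyama scheme: a snapshot pair (x0, x1) separated by a possibly sample-specific time step dt is scored under the Gaussian transition density x1 ~ N(x0 + dt*f(x0), dt*sigma(x0)^2). The functions drift_net and diff_net stand for hypothetical neural networks for the drift and diffusivity; names and shapes are illustrative assumptions.

    import math
    import torch

    def euler_maruyama_nll(x0, x1, dt, drift_net, diff_net):
        # x0, x1: tensors of coarse observables at consecutive snapshots.
        # dt: scalar or per-snapshot tensor of time steps (broadcasts),
        #     so scattered snapshot data with unequal steps is handled naturally.
        mean = x0 + dt * drift_net(x0)                 # Euler-Maruyama drift step
        var = dt * diff_net(x0) ** 2 + 1e-8            # diffusivity term, kept positive
        # Gaussian negative log-likelihood of x1 under the one-step transition density.
        return 0.5 * (((x1 - mean) ** 2) / var + torch.log(2 * math.pi * var)).sum()

A Milstein-based or gray-box ("physics-informed") variant would modify the mean and variance of this transition density accordingly; this sketch only conveys how a stochastic integrator can shape the training loss.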