Zhu W, Zhang H-K, Kevrekidis PG
Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, Massachusetts 01003-4515, USA.
Phys Rev E. 2023 Aug;108(2):L022301. doi: 10.1103/PhysRevE.108.L022301.
We introduce a methodology for seeking conservation laws within a Hamiltonian dynamical system, which we term "neural deflation." Inspired by deflation methods for steady states of dynamical systems, we iteratively train a sequence of neural networks to minimize a regularized loss function that accounts for the requirement that conserved quantities be in involution, and that enforces their functional independence, consistently in the infinite-sample limit. The method is applied to a series of integrable and nonintegrable lattice differential-difference equations. For the former, the predicted number of conservation laws grows extensively with the number of degrees of freedom, while for the latter it generically saturates at a threshold related to the number of conserved quantities in the system. This data-driven tool could prove valuable in assessing a model's conserved quantities and its potential integrability.
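A core ingredient of the loss described in the abstract is the requirement that candidate conserved quantities Poisson-commute with the Hamiltonian (and with one another). The following is a minimal numerical sketch of such an involution penalty, using central finite differences on a one-degree-of-freedom harmonic oscillator; the function names and test quantities here are illustrative stand-ins, not the authors' implementation, which trains neural networks against this kind of penalty.

```python
import numpy as np

def poisson_bracket(f, g, q, p, eps=1e-5):
    """Approximate {f, g} = (df/dq)(dg/dp) - (df/dp)(dg/dq) pointwise
    via central finite differences (assumed illustrative helper)."""
    dfdq = (f(q + eps, p) - f(q - eps, p)) / (2 * eps)
    dfdp = (f(q, p + eps) - f(q, p - eps)) / (2 * eps)
    dgdq = (g(q + eps, p) - g(q - eps, p)) / (2 * eps)
    dgdp = (g(q, p + eps) - g(q, p - eps)) / (2 * eps)
    return dfdq * dgdp - dfdp * dgdq

# Harmonic-oscillator Hamiltonian and two candidate quantities:
H = lambda q, p: 0.5 * (p**2 + q**2)
F = lambda q, p: p**2 + q**2   # conserved: {H, F} = 0
G = lambda q, p: q * p         # not conserved: {H, G} = q**2 - p**2

# Mean-squared bracket over random phase-space samples plays the role
# of the involution term in a training loss.
rng = np.random.default_rng(0)
qs, ps = rng.standard_normal(100), rng.standard_normal(100)
loss_F = np.mean(poisson_bracket(H, F, qs, ps) ** 2)  # ~0
loss_G = np.mean(poisson_bracket(H, G, qs, ps) ** 2)  # order 1
```

In the paper's setting, such brackets are evaluated between a learned Hamiltonian and neural-network outputs (and among the networks themselves) and minimized over samples; the analytic `F` and `G` above merely show that the penalty vanishes for a genuinely conserved quantity and stays bounded away from zero otherwise.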