
The Symplectic Adjoint Method: Memory-Efficient Backpropagation of Neural-Network-Based Differential Equations.

Author Information

Matsubara Takashi, Miyatake Yuto, Yaguchi Takaharu

Publication Information

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10526-10538. doi: 10.1109/TNNLS.2023.3242345. Epub 2024 Aug 5.

DOI: 10.1109/TNNLS.2023.3242345
PMID: 37027779
Abstract

The combination of neural networks and numerical integration can provide highly accurate models of continuous-time dynamical systems and probabilistic distributions. However, if a neural network is used n times during numerical integration, the whole computation graph can be considered as a network n times deeper than the original. The backpropagation algorithm consumes memory in proportion to the number of uses times the network size, causing practical difficulties. This is true even if a checkpointing scheme divides the computation graph into subgraphs. Alternatively, the adjoint method obtains a gradient by a numerical integration backward in time; although this method consumes memory only for a single network use, the computational cost of suppressing numerical errors is high. The symplectic adjoint method proposed in this study, an adjoint method solved by a symplectic integrator, obtains the exact gradient (up to rounding error) with memory proportional to the number of uses plus the network size. The theoretical analysis shows that it consumes much less memory than the naive backpropagation algorithm and checkpointing schemes. The experiments verify the theory, and they also demonstrate that the symplectic adjoint method is faster than the adjoint method and is more robust to rounding errors.
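The contrast the abstract draws, backpropagation through the full computation graph versus a backward-in-time adjoint integration, can be made concrete with a short sketch. The code below is a minimal illustration of the plain continuous adjoint method that the abstract compares against, not the authors' symplectic variant: it integrates the state forward with an explicit Euler scheme while keeping only the current state, then integrates the adjoint system da/dt = -a^T df/dz backward in time, accumulating the parameter gradient dL/dtheta as the integral of a^T df/dtheta. PyTorch is assumed, and the names ODEFunc, odeint_forward, and adjoint_backward are illustrative, not from the paper.

```python
# Minimal sketch of the continuous adjoint method for a neural ODE (illustrative only).
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Small network defining the vector field dz/dt = f(z; theta)."""
    def __init__(self, dim=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, z):
        return self.net(z)

def odeint_forward(f, z0, t0, t1, n_steps):
    """Explicit-Euler forward solve; no computation graph is stored."""
    h = (t1 - t0) / n_steps
    z = z0.detach()
    with torch.no_grad():
        for _ in range(n_steps):
            z = z + h * f(z)
    return z

def adjoint_backward(f, z1, dLdz1, t0, t1, n_steps):
    """Backward adjoint integration:
    da/dt = -a^T df/dz,  dL/dtheta = integral over [t0, t1] of a^T df/dtheta dt."""
    h = (t1 - t0) / n_steps
    a, z = dLdz1.clone(), z1.clone()
    grads = [torch.zeros_like(p) for p in f.parameters()]
    for _ in range(n_steps):
        z_req = z.detach().requires_grad_(True)
        fz = f(z_req)
        # Vector-Jacobian products give a^T df/dz and a^T df/dtheta in one call.
        vjps = torch.autograd.grad(fz, [z_req, *f.parameters()],
                                   grad_outputs=a)
        a = a + h * vjps[0]                       # adjoint stepped backward in time
        grads = [g + h * v for g, v in zip(grads, vjps[1:])]
        z = z - h * fz.detach()                   # state reconstructed backward
    return grads

if __name__ == "__main__":
    torch.manual_seed(0)
    f = ODEFunc()
    z0 = torch.randn(1, 2)
    z1 = odeint_forward(f, z0, 0.0, 1.0, n_steps=20)
    dLdz1 = 2.0 * z1                              # gradient of the toy loss ||z1||^2
    param_grads = adjoint_backward(f, z1, dLdz1, 0.0, 1.0, n_steps=20)
    print([tuple(g.shape) for g in param_grads])
```

Memory here is dominated by a single evaluation of the network rather than the full unrolled graph, which is the adjoint method's advantage noted in the abstract; the paper's contribution is to solve this backward system with a symplectic integrator so that the gradient is exact up to rounding error while memory stays proportional to the number of uses plus the network size.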


Similar Articles

1. The Symplectic Adjoint Method: Memory-Efficient Backpropagation of Neural-Network-Based Differential Equations.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10526-10538. doi: 10.1109/TNNLS.2023.3242345. Epub 2024 Aug 5.
2. The discrete adjoint method for parameter identification in multibody system dynamics.
   Multibody Syst Dyn. 2018;42(4):397-410. doi: 10.1007/s11044-017-9600-9. Epub 2017 Nov 3.
3. Event-based backpropagation can compute exact gradients for spiking neural networks.
   Sci Rep. 2021 Jun 18;11(1):12829. doi: 10.1038/s41598-021-91786-z.
4. Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE.
   Proc Mach Learn Res. 2020;119:11639-11649.
5. Hamiltonian neural networks for solving equations of motion.
   Phys Rev E. 2022 Jun;105(6-2):065305. doi: 10.1103/PhysRevE.105.065305.
6. Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction.
   IEEE Trans Radiat Plasma Med Sci. 2023 Apr;7(4):410-420. doi: 10.1109/trpms.2023.3240934. Epub 2023 Jan 30.
7. Efficient computation of adjoint sensitivities at steady-state in ODE models of biochemical reaction networks.
   PLoS Comput Biol. 2023 Jan 3;19(1):e1010783. doi: 10.1371/journal.pcbi.1010783. eCollection 2023 Jan.
8. GPU-Accelerated Adjoint Algorithmic Differentiation.
   Comput Phys Commun. 2016 Mar 1;200:300-311. doi: 10.1016/j.cpc.2015.10.027.
9. Global Optimization of Dielectric Metasurfaces Using a Physics-Driven Neural Network.
   Nano Lett. 2019 Aug 14;19(8):5366-5372. doi: 10.1021/acs.nanolett.9b01857. Epub 2019 Jul 15.
10. Variational data assimilation for the initial-value dynamo problem.
    Phys Rev E Stat Nonlin Soft Matter Phys. 2011 Nov;84(5 Pt 2):056321. doi: 10.1103/PhysRevE.84.056321. Epub 2011 Nov 23.

Cited By

1. A novel hybrid framework for efficient higher order ODE solvers using neural networks and block methods.
   Sci Rep. 2025 Mar 12;15(1):8456. doi: 10.1038/s41598-025-90556-5.