

Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations.

Authors

Chen Chuqi, Yang Yahong, Xiang Yang, Hao Wenrui

Affiliations

Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong.

Department of Mathematics, The Pennsylvania State University, Pennsylvania, USA.

Publication

J Sci Comput. 2025 Aug;104(2). doi: 10.1007/s10915-025-02965-3. Epub 2025 Jun 24.

DOI: 10.1007/s10915-025-02965-3
PMID: 40908979
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12407148/
Abstract

Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering, especially in scenarios featuring complex domains or incorporation of empirical data. One advantage of the neural network methods for PDEs lies in its automatic differentiation (AD), which necessitates only the sample points themselves, unlike traditional finite difference (FD) approximations that require nearby local points to compute derivatives. In this paper, we quantitatively demonstrate the advantage of AD in training neural networks. The concept of truncated entropy is introduced to characterize the training property. Specifically, through comprehensive experimental and theoretical analyses conducted on random feature models and two-layer neural networks, we discover that the defined truncated entropy serves as a reliable metric for quantifying the residual loss of random feature models and the training speed of neural networks for both AD and FD methods. Our experimental and theoretical analyses demonstrate that, from a training perspective, AD outperforms FD in solving PDEs.
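The abstract's core contrast, that AD needs only the sample points themselves while FD needs nearby local points, can be illustrated with a minimal sketch. This is not the paper's code: it is a hand-rolled forward-mode AD via dual numbers (stdlib only), differentiating f(x) = sin(x)·eˣ at a single point, compared against a central finite difference that evaluates f at two neighboring points and carries O(h²) truncation error.

```python
import math

class Dual:
    """Minimal forward-mode AD: carries a value and its derivative (dual part)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagated automatically
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule
    return math.sin(x)

def exp(x):
    if isinstance(x, Dual):
        return Dual(math.exp(x.val), math.exp(x.val) * x.dot)  # chain rule
    return math.exp(x)

def f(x):
    return sin(x) * exp(x)

x0 = 1.0
exact = (math.cos(x0) + math.sin(x0)) * math.exp(x0)  # analytic d/dx[sin(x)e^x]

# AD: one evaluation at the sample point itself; derivative is exact to rounding
ad = f(Dual(x0, 1.0)).dot

# FD: central difference needs the two nearby points x0 +/- h; error is O(h^2)
h = 1e-3
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)

print("AD error:", abs(ad - exact))
print("FD error:", abs(fd - exact))
```

The same distinction drives the paper's analysis of PDE solvers: in a physics-informed residual loss, AD differentiates the network exactly at each collocation point, whereas an FD stencil both requires extra evaluations at neighboring points and injects truncation error that, as the authors show via truncated entropy, degrades the training dynamics.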


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/a23fa0f6ebe7/10915_2025_2965_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/61429297cc23/10915_2025_2965_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/33e9fef4ae4b/10915_2025_2965_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/1bebfb9c0883/10915_2025_2965_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/828ff43623fd/10915_2025_2965_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/d8cbc4ac2203/10915_2025_2965_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/3fd43d911545/10915_2025_2965_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/f8b3dc11ec2e/10915_2025_2965_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/c893f3cbcfa7/10915_2025_2965_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/61398cf82469/10915_2025_2965_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/d9c3fc646570/10915_2025_2965_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/fb133e7a9db8/10915_2025_2965_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/44880ad3fd9f/10915_2025_2965_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7107/12769632/b0d8de12fb8a/10915_2025_2965_Fig14_HTML.jpg

