Suppr 超能文献



Bayesian polynomial neural networks and polynomial neural ordinary differential equations.

Affiliations

Department of Chemical Engineering, University of California, Santa Barbara, California, United States of America.

Department of Statistics and Applied Probability, University of California, Santa Barbara, California, United States of America.

Publication Information

PLoS Comput Biol. 2024 Oct 10;20(10):e1012414. doi: 10.1371/journal.pcbi.1012414. eCollection 2024 Oct.

DOI:10.1371/journal.pcbi.1012414
PMID:39388392
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11476690/
Abstract

Symbolic regression with polynomial neural networks and polynomial neural ordinary differential equations (ODEs) are two recent and powerful approaches for equation recovery of many science and engineering problems. However, these methods provide point estimates for the model parameters and are currently unable to accommodate noisy data. We address this challenge by developing and validating the following Bayesian inference methods: the Laplace approximation, Markov Chain Monte Carlo (MCMC) sampling methods, and variational inference. We have found the Laplace approximation to be the best method for this class of problems. Our work can be easily extended to the broader class of symbolic neural networks to which the polynomial neural network belongs.
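The abstract compares three ways of obtaining a posterior over polynomial coefficients instead of a point estimate. As a minimal illustration of the Laplace approximation (the method the authors found best for this class of problems), the sketch below fits a cubic to noisy data and approximates the coefficient posterior by a Gaussian centered at the MAP estimate with covariance equal to the inverse Hessian of the negative log posterior. This is not the paper's code; the noise level `sigma`, prior precision `alpha`, and the synthetic cubic are illustrative assumptions.

```python
# Illustrative sketch: Laplace approximation of the posterior over
# polynomial coefficients w for y = X @ w + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy data from a cubic: y = 1 + 2x - 0.5x^3 + noise
x = np.linspace(-2, 2, 50)
sigma = 0.3                              # known noise std (assumption)
y = 1 + 2 * x - 0.5 * x**3 + rng.normal(0, sigma, x.size)

X = np.vander(x, 4, increasing=True)     # design matrix [1, x, x^2, x^3]
alpha = 1e-2                             # Gaussian prior precision (assumption)

# MAP estimate: minimize 0.5/sigma^2 * ||y - Xw||^2 + 0.5 * alpha * ||w||^2
A = X.T @ X / sigma**2 + alpha * np.eye(4)    # Hessian of neg. log posterior
w_map = np.linalg.solve(A, X.T @ y / sigma**2)

# Laplace approximation: posterior over w is approximately N(w_map, A^{-1})
cov = np.linalg.inv(A)
std = np.sqrt(np.diag(cov))
for i, (m, s) in enumerate(zip(w_map, std)):
    print(f"w[{i}] = {m:+.3f} +/- {s:.3f}")
```

For a model linear in its parameters, as here, the Gaussian posterior is exact; for the paper's polynomial neural networks, whose outputs are nonlinear in the weights, the same construction is only an approximation around the mode, which is the regime the authors evaluate.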


[Figures 1-18 (pcbi.1012414.g001-g018): images available with the full text at PMC11476690.]

Similar Articles

1. Bayesian polynomial neural networks and polynomial neural ordinary differential equations.
PLoS Comput Biol. 2024 Oct 10;20(10):e1012414. doi: 10.1371/journal.pcbi.1012414. eCollection 2024 Oct.
2. Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation.
Philos Trans A Math Phys Eng Sci. 2012 Dec 31;371(1984):20110541. doi: 10.1098/rsta.2011.0541. Print 2013 Feb 13.
3. No-U-turn sampling for fast Bayesian inference in ADMB and TMB: Introducing the adnuts and tmbstan R packages.
PLoS One. 2018 May 24;13(5):e0197954. doi: 10.1371/journal.pone.0197954. eCollection 2018.
4. Efficient Markov chain Monte Carlo methods for decoding neural spike trains.
Neural Comput. 2011 Jan;23(1):46-96. doi: 10.1162/NECO_a_00059. Epub 2010 Oct 21.
5. Variational Bayes inference for hidden Markov diagnostic classification models.
Br J Math Stat Psychol. 2024 Feb;77(1):55-79. doi: 10.1111/bmsp.12308. Epub 2023 May 30.
6. Markov chain Monte Carlo methods for state-space models with point process observations.
Neural Comput. 2012 Jun;24(6):1462-86. doi: 10.1162/NECO_a_00281. Epub 2012 Feb 24.
7. A Probabilistic Framework for Molecular Network Structure Inference by Means of Mechanistic Modeling.
IEEE/ACM Trans Comput Biol Bioinform. 2019 Nov-Dec;16(6):1843-1854. doi: 10.1109/TCBB.2018.2825327. Epub 2018 Apr 10.
8. A simple introduction to Markov Chain Monte-Carlo sampling.
Psychon Bull Rev. 2018 Feb;25(1):143-154. doi: 10.3758/s13423-016-1015-8.
9. Markov chain Monte Carlo simulation of a Bayesian mixture model for gene network inference.
Genes Genomics. 2019 May;41(5):547-555. doi: 10.1007/s13258-019-00789-8. Epub 2019 Feb 11.
10. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.
Stat Med. 2013 May 20;32(11):1917-53. doi: 10.1002/sim.5590. Epub 2012 Sep 7.

Cited By

1. Training stiff neural ordinary differential equations with implicit single-step methods.
Chaos. 2024 Dec 1;34(12). doi: 10.1063/5.0243382.

References

1. Symbolic regression via neural networks.
Chaos. 2023 Aug 1;33(8). doi: 10.1063/5.0134464.
2. Deep Learning and Symbolic Regression for Discovering Parametric Equations.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16775-16787. doi: 10.1109/TNNLS.2023.3297978. Epub 2024 Oct 29.
3. Interpretable polynomial neural ordinary differential equations.
Chaos. 2023 Apr 1;33(4). doi: 10.1063/5.0130803.
4. Sparsifying priors for Bayesian uncertainty quantification in model discovery.
R Soc Open Sci. 2022 Feb 23;9(2):211823. doi: 10.1098/rsos.211823. eCollection 2022 Feb.
5. Stiff neural ordinary differential equations.
Chaos. 2021 Sep;31(9):093122. doi: 10.1063/5.0060697.
6. Collocation based training of neural ordinary differential equations.
Stat Appl Genet Mol Biol. 2021 Jul 9;20(2):37-49. doi: 10.1515/sagmb-2020-0025.
7. Deep Polynomial Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4021-4034. doi: 10.1109/TPAMI.2021.3058891. Epub 2022 Jul 1.
8. Epidemiological modeling in StochSS Live!
Bioinformatics. 2021 Sep 9;37(17):2787-2788. doi: 10.1093/bioinformatics/btab061.
9. Autonomous Discovery of Unknown Reaction Pathways from Data by Chemical Reaction Neural Network.
J Phys Chem A. 2021 Feb 4;125(4):1082-1092. doi: 10.1021/acs.jpca.0c09316. Epub 2021 Jan 20.
10. SINDy-PI: a robust algorithm for parallel implicit sparse identification of nonlinear dynamics.
Proc Math Phys Eng Sci. 2020 Oct;476(2242):20200279. doi: 10.1098/rspa.2020.0279. Epub 2020 Oct 7.