
Invariance priors for Bayesian feed-forward neural networks.

Authors

Udo von Toussaint, Silvio Gori, Volker Dose

Affiliation

Centre for Interdisciplinary Plasma Science, Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, D-85748 Garching, Germany.

Publication

Neural Netw. 2006 Dec;19(10):1550-7. doi: 10.1016/j.neunet.2006.01.017. Epub 2006 Mar 31.

Abstract

Neural networks (NNs) are valued for their flexibility in problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause overfitting and can hamper the generalization of neural networks. Many approaches to regularizing NNs have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior in accordance with Bayesian probability theory for feed-forward networks. An optimal network is determined by Bayesian model comparison, verifying the applicability of this approach. Additionally, the prior presented affords cell pruning.
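The abstract's selection mechanism, picking an optimal network by Bayesian model comparison, can be illustrated with a toy sketch. The paper's invariance-derived prior is not reproduced here; the example below substitutes a standard Gaussian prior on the weights of a linear-in-parameters model, for which the log evidence has a closed form, and shows how the evidence trades data fit against an Occam penalty on model flexibility. The basis choice and the hyperparameters `alpha` and `beta` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_evidence(Phi, y, alpha=1.0, beta=25.0):
    """Closed-form log marginal likelihood (evidence) of a linear-in-
    parameters model y = Phi @ w + noise, with Gaussian prior
    w ~ N(0, 1/alpha * I) and Gaussian noise of precision beta."""
    N, M = Phi.shape
    A = alpha * np.eye(M) + beta * Phi.T @ Phi   # posterior precision matrix
    m = beta * np.linalg.solve(A, Phi.T @ y)     # posterior mean of weights
    resid = y - Phi @ m
    E = 0.5 * beta * resid @ resid + 0.5 * alpha * m @ m
    _, logdetA = np.linalg.slogdet(A)
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
            - E - 0.5 * logdetA - 0.5 * N * np.log(2.0 * np.pi))

# Synthetic data: a smooth function plus noise of std 0.2 (so beta = 25).
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
y = np.sin(np.pi * x) + 0.2 * rng.standard_normal(x.size)

# Candidate "architectures": Legendre-polynomial bases of increasing size.
evidences = {deg: log_evidence(np.polynomial.legendre.legvander(x, deg), y)
             for deg in (1, 3, 9)}
best = max(evidences, key=evidences.get)
# The evidence penalizes both the underfitting degree-1 model and the
# over-flexible degree-9 model (Occam factor), without any validation set.
```

In the same spirit, the paper ranks feed-forward networks of different sizes by their evidence; the Occam factor built into the marginal likelihood is what makes the comparison favor the smallest network that still explains the data.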

