

Convergence analysis of online gradient method for BP neural networks.

Affiliation

School of Mathematical Sciences, Dalian University of Technology, Dalian, PR China.

Publication Information

Neural Netw. 2011 Jan;24(1):91-8. doi: 10.1016/j.neunet.2010.09.007. Epub 2010 Sep 16.

Abstract

This paper considers a class of online gradient learning methods for backpropagation (BP) neural networks with a single hidden layer. We assume that in each training cycle, every sample in the training set is supplied to the network exactly once, in a stochastic order. Interestingly, these stochastic learning methods can be shown to be deterministically convergent. The paper presents weak and strong convergence results for the learning methods, showing that the gradient of the error function tends to zero and the weight sequence converges to a fixed point, respectively. The conditions on the activation function and the learning rate that guarantee convergence are relaxed compared with existing results. Our convergence results are valid not only for S-S type neural networks (both the output and hidden neurons use sigmoid activation functions), but also for P-P, P-S and S-P type networks, where S and P denote sigmoid and polynomial functions, respectively.
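The training scheme analyzed in the abstract can be sketched as follows for the S-S case: a single-hidden-layer network whose weights are updated after every sample, with each sample presented exactly once per cycle in a freshly shuffled order. The network size, learning-rate schedule, and toy data below are illustrative assumptions for exposition, not details taken from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact algorithm) of the
# online gradient method: per-sample weight updates, each training sample used
# exactly once per cycle in a random order, sigmoid hidden and output units (S-S).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online_bp(X, y, n_hidden=5, n_cycles=200, eta0=0.5, seed=0):
    """Online gradient training of a single-hidden-layer network on (X, y)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.5, size=(n_hidden, d))   # input-to-hidden weights
    w = rng.normal(scale=0.5, size=n_hidden)        # hidden-to-output weights
    for cycle in range(n_cycles):
        eta = eta0 / (1 + cycle)                    # decaying learning rate (assumed schedule)
        order = rng.permutation(n)                  # each sample supplied exactly once per cycle
        for i in order:
            h = sigmoid(V @ X[i])                   # hidden activations
            out = sigmoid(w @ h)                    # network output
            err = out - y[i]
            # gradients of the per-sample squared error 0.5 * (out - y_i)^2
            delta_out = err * out * (1 - out)
            grad_w = delta_out * h
            grad_V = np.outer(delta_out * w * h * (1 - h), X[i])
            w -= eta * grad_w                       # online (per-sample) updates
            V -= eta * grad_V
    return V, w

if __name__ == "__main__":
    # Toy data: noisy XOR-like labels, purely for demonstration.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(100, 2))
    y = ((X[:, 0] * X[:, 1]) > 0).astype(float)
    V, w = train_online_bp(X, y)
    preds = sigmoid(sigmoid(X @ V.T) @ w) > 0.5
    print("training accuracy:", (preds == y).mean())
```

The per-cycle reshuffling is what makes the method "stochastic" in the sense of the abstract, while the convergence guarantees discussed there (gradient of the error tending to zero, weights approaching a fixed point) hold deterministically under the paper's conditions on the activation functions and the learning rate.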

