Rubanov N S
Radiophysics Department, Belarussian State University, Minsk, Belarus.
IEEE Trans Neural Netw. 2000;11(2):295-305. doi: 10.1109/72.839001.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms, and layer-wise learning algorithms, several drawbacks remain to be overcome. In particular, two major drawbacks are convergence to local minima and long learning time. In this paper we propose an efficient learning method for an FNN that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing an FNN and propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces learning time, and improves generalization in comparison with other well-known methods.
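To make the idea concrete: the classical Kaczmarz method solves a linear system row by row via the projection x ← x + ((b_i − a_i·x) / ||a_i||²) a_i. The sketch below is a minimal illustration, not the paper's exact algorithm: it trains a two-layer FNN on a toy regression task by (i) treating the linear output layer's equation W2·a1 = t as one Kaczmarz projection, and (ii) for the hidden layer, backpropagating a desired activation change and linearizing the sigmoid layer with its first-order Taylor term before projecting each row. The network shape, variable names (W1, W2, forward), and the particular target-propagation rule are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (assumed architecture, not the paper's exact
# formulation): hybrid BP / layer-wise Kaczmarz training of a 1-8-1 FNN.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 50)[:, None]          # (P, 1) inputs
Y = np.sin(X)                                        # (P, 1) targets

n_in, n_hid = 1, 8
W1 = rng.normal(scale=0.5, size=(n_hid, n_in + 1))   # hidden layer (+bias)
W2 = rng.normal(scale=0.5, size=(1, n_hid + 1))      # linear output (+bias)

def forward(x):
    a0 = np.append(x, 1.0)                 # input with bias term
    z1 = W1 @ a0                           # hidden pre-activations
    a1 = np.append(sigmoid(z1), 1.0)       # hidden activations with bias
    y = W2 @ a1                            # linear output unit
    return a0, z1, a1, y

for epoch in range(300):
    for x, t in zip(X, Y):
        a0, z1, a1, y = forward(x)
        e = (t - y)[0]                     # scalar output error

        # Backpropagated hidden-layer target: the minimum-norm change
        # da1 satisfying W2h @ da1 = e (one Kaczmarz projection).
        w2h = W2[0, :n_hid]
        da1 = e * w2h / (w2h @ w2h)

        # Hidden layer: linearize sigmoid(W1 @ a0) in W1 via its
        # first-order Taylor term, then project each row (Kaczmarz).
        s = sigmoid(z1) * (1.0 - sigmoid(z1))   # sigmoid derivative
        for j in range(n_hid):
            g = s[j] * a0                       # d a1[j] / d W1[j]
            W1[j] += da1[j] * g / (g @ g + 1e-8)

        # Output layer: y = W2 @ a1 is already linear in W2, so a single
        # Kaczmarz projection solves (W2 + dW2) @ a1 = t for this pattern.
        W2[0] += e * a1 / (a1 @ a1)
```

Because each layer's update reduces to row-wise projections on a linearized system, no step-size tuning is needed for the projections themselves; this is one plausible reading of why such layer-wise schemes can be more stable than plain gradient descent.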