

On-line learning algorithms for locally recurrent neural networks.

Author information

Campolucci P, Uncini A, Piazza F, Rao B D

Affiliation

Dipartimento di Elettronica ed Automatica, Università di Ancona, Ancona, Italy.

Publication information

IEEE Trans Neural Netw. 1999;10(2):253-71. doi: 10.1109/72.750549.

Abstract

This paper focuses on on-line learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP), whose on-line version, causal recursive backpropagation (CRBP), presents some advantages over other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view of gradient-calculation techniques for recurrent networks with local feedback. The only learning method previously proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and a higher speed of convergence than the Back-Tsoi algorithm, as supported by the theoretical development and confirmed by simulations. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performance, and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time, while RTRL is not local in space.
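To make the notion of "local feedback" concrete, the sketch below implements a toy IIR synapse of the kind the abstract refers to: the synapse output depends on a window of past inputs (the moving-average part) and on its own past outputs (the autoregressive feedback that makes the network locally, rather than globally, recurrent). This is an illustrative assumption, not the paper's formulation; the class name and coefficient labels `b` (feedforward taps) and `a` (feedback taps) are my own.

```python
from collections import deque

class IIRSynapse:
    """Toy IIR synapse: y(t) = sum_k b_k * x(t-k) + sum_k a_k * y(t-k).
    The feedback through the synapse's own past outputs is the 'local
    recursion'; there are no connections looping across neurons.
    Illustrative sketch only, not the paper's exact model."""

    def __init__(self, b, a):
        self.b = list(b)  # feedforward (MA) taps b_0 .. b_M
        self.a = list(a)  # feedback (AR) taps a_1 .. a_N
        # Histories start at zero; deques hold x(t)..x(t-M) and y(t-1)..y(t-N).
        self.x_hist = deque([0.0] * len(self.b), maxlen=len(self.b))
        self.y_hist = deque([0.0] * max(len(self.a), 1),
                            maxlen=max(len(self.a), 1))

    def step(self, x):
        """Advance one time step with input x and return the synapse output."""
        self.x_hist.appendleft(x)
        y = sum(bk * xk for bk, xk in zip(self.b, self.x_hist))
        y += sum(ak * yk for ak, yk in zip(self.a, self.y_hist))
        self.y_hist.appendleft(y)  # this output feeds back at the next step
        return y

# A first-order example: y(t) = x(t) + 0.5 * y(t-1).
syn = IIRSynapse(b=[1.0], a=[0.5])
outputs = [syn.step(1.0) for _ in range(3)]  # → [1.0, 1.5, 1.75]
```

Because each synapse's state is purely its own input/output history, a gradient algorithm such as CRBP can stay local in space and time, which is the implementation advantage over RTRL that the abstract highlights.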

