Gradient calculations for dynamic recurrent neural networks: a survey.

Author information

Pearlmutter B A

Affiliation

Learning Syst. Dept., Siemens Corp. Res. Inc., Princeton, NJ.

Publication information

IEEE Trans Neural Netw. 1995;6(5):1212-28. doi: 10.1109/72.410363.

Abstract

Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses the advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks, presents some simulations, and at the end addresses issues of computational complexity and learning speed.
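The gradient techniques named above lend themselves to short worked sketches. For temporally continuous networks, the adjoint-equation approach can be summarized as follows; this is a common presentation with unit time constants, and the notation ($y_i$ for unit states, $w_{ij}$ for the weight from unit $j$ into unit $i$, $z_i$ for adjoint variables) is conventional rather than taken verbatim from the paper. The states obey

\[
\dot y_i = -y_i + \sigma(x_i) + I_i(t), \qquad x_i = \sum_j w_{ij}\, y_j ,
\]

and for an error functional $E = \int_0^T \tfrac{1}{2} \sum_i e_i(t)^2 \, dt$ with $e_i = y_i - d_i$ on output units (zero elsewhere), the adjoint variables are integrated backward in time from $z_i(T) = 0$:

\[
\dot z_i = z_i - e_i - \sum_j w_{ji}\, \sigma'(x_j)\, z_j , \qquad
\frac{\partial E}{\partial w_{ij}} = \int_0^T \sigma'(x_i)\, y_j\, z_i \, dt .
\]

Exact signs and time-constant factors differ between presentations.

For the clocked case, the sketch below illustrates backpropagation through time on an Elman-style network with tanh hidden units and squared error; it is illustrative only, not code from the paper.

```python
# Minimal BPTT sketch (illustrative, not from the surveyed paper):
# an Elman-style RNN, h_{t+1} = tanh(W_xh x_t + W_hh h_t), y_t = W_hy h_{t+1},
# with squared error against a toy target sequence.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 2, 4, 1, 10

W_xh = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))  # hidden -> hidden (recurrent)
W_hy = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output

xs = rng.normal(size=(T, n_in))            # toy inputs
ds = rng.normal(size=(T, n_out))           # toy targets

# Forward pass: the whole state trajectory is stored (BPTT's memory cost).
hs = np.zeros((T + 1, n_hid))
ys = np.zeros((T, n_out))
for t in range(T):
    hs[t + 1] = np.tanh(W_xh @ xs[t] + W_hh @ hs[t])
    ys[t] = W_hy @ hs[t + 1]

# Backward pass: run the error back through the stored states.
dW_xh, dW_hh, dW_hy = map(np.zeros_like, (W_xh, W_hh, W_hy))
dh_next = np.zeros(n_hid)
for t in reversed(range(T)):
    e = ys[t] - ds[t]                      # dE/dy_t for squared error
    dW_hy += np.outer(e, hs[t + 1])
    dh = W_hy.T @ e + dh_next              # total error reaching h_{t+1}
    dz = (1.0 - hs[t + 1] ** 2) * dh       # back through tanh
    dW_xh += np.outer(dz, xs[t])
    dW_hh += np.outer(dz, hs[t])
    dh_next = W_hh.T @ dz                  # carry error one step further back

print("gradient norms:", *(np.linalg.norm(g) for g in (dW_xh, dW_hh, dW_hy)))
```

The stored trajectory `hs` is the point of contrast with forward propagation (real-time recurrent learning): BPTT needs memory proportional to the sequence length but only on the order of one operation per weight per time step, whereas the forward method runs on-line with memory independent of sequence length at roughly O(n^4) work per step for a fully connected network of n units. This is the kind of trade-off the survey's closing discussion of computational complexity addresses.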
