Learning to forget: continual prediction with LSTM.

Author information

Gers F A, Schmidhuber J, Cummins F

Affiliations

IDSIA, Lugano, Switzerland.

Publication information

Neural Comput. 2000 Oct;12(10):2451-71. doi: 10.1162/089976600300015015.

Abstract

Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
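The mechanism the abstract describes is a gate on the carried-over cell state: where standard LSTM adds new input activity to the old state unconditionally, a forget gate scales the old state before the addition, so a cell can learn to discard it. Below is a minimal sketch of one such cell step, not the paper's implementation: it assumes a NumPy formulation in which a single weight matrix W stacks the forget, input, output, and candidate blocks, and all names, shapes, and initializations are illustrative.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, b):
        # W has shape (4 * n_hidden, n_input + n_hidden); b has shape (4 * n_hidden,).
        # The four gate blocks are stacked as [forget, input, output, candidate].
        n = h_prev.shape[0]
        z = W @ np.concatenate([x, h_prev]) + b
        f = sigmoid(z[0 * n:1 * n])   # forget gate: fraction of the old state to keep
        i = sigmoid(z[1 * n:2 * n])   # input gate: fraction of the new candidate to write
        o = sigmoid(z[2 * n:3 * n])   # output gate
        g = np.tanh(z[3 * n:4 * n])   # candidate cell update
        # Standard (1997) LSTM effectively fixes f = 1, so c can only accumulate
        # along an unsegmented stream; a learned forget gate lets the cell drive
        # f toward 0 and reset its own state when that is useful.
        c = f * c_prev + i * g
        h = o * np.tanh(c)
        return h, c

    # Driving the step over an unsegmented stream, the continual-prediction
    # setting the paper targets (dimensions and random data are illustrative):
    n_in, n_hid = 3, 5
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
    b = np.zeros(4 * n_hid)
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in rng.standard_normal((100, n_in)):
        h, c = lstm_step(x, h, c, W, b)

With f pinned at 1 the state c could only grow as the stream continues; letting the network learn f is what allows it to release internal resources at appropriate times, as the abstract puts it.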
