Feedforward Approximations to Dynamic Recurrent Network Architectures.

Author Information

Muir Dylan R

Affiliation

Biozentrum, University of Basel, Basel 4056, Switzerland

Publication Information

Neural Comput. 2018 Feb;30(2):546-567. doi: 10.1162/neco_a_01042. Epub 2017 Nov 21.

Abstract

Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input. Modern machine learning systems are designed to operate efficiently on feedforward architectures. We hypothesized that two-layer feedforward architectures with simple, deterministic dynamics could approximate the responses of single-layer recurrent network architectures. By identifying the fixed-point responses of a given recurrent network, we trained two-layer networks to directly approximate the fixed-point response to a given input. These feedforward networks then embodied useful computations, including competitive interactions, information transformations, and noise rejection. Our approach was able to find useful approximations to recurrent networks, which can then be evaluated in linear and deterministic time complexity.
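The approach the abstract describes can be illustrated with a minimal sketch: relax a simple linear recurrent rate network to its fixed point for a batch of inputs, then train a two-layer feedforward network to map each input directly to that fixed-point response. All network sizes, weight scalings, and the gradient-descent recipe below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch: approximate the fixed points of the recurrent
# dynamics  dr/dt = -r + W r + x  with a two-layer feedforward network.
rng = np.random.default_rng(0)
N = 10                                   # recurrent units (assumed size)

# Weak random recurrent weights, so the dynamics have a stable fixed point
W = 0.1 * rng.standard_normal((N, N))

def fixed_points(X, steps=300, dt=0.1):
    """Relax dr/dt = -r + r W^T + x to steady state, one row per input."""
    R = np.zeros_like(X)
    for _ in range(steps):
        R += dt * (-R + R @ W.T + X)
    return R

# Training set: inputs paired with the recurrent network's fixed points
X = rng.standard_normal((1000, N))
Y = fixed_points(X)

# Two-layer feedforward approximator: tanh hidden layer, linear readout
H = 64
W1 = 0.5 * rng.standard_normal((N, H))
W2 = 0.5 * rng.standard_normal((H, N))

lr, losses = 0.05, []
for epoch in range(300):
    Hid = np.tanh(X @ W1)                # hidden activations
    err = Hid @ W2 - Y                   # prediction error
    losses.append(np.mean(err ** 2))
    # Full-batch gradient descent on mean squared error
    gW2 = Hid.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - Hid ** 2)) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

# A single feedforward pass now replaces the iterative relaxation
x_test = rng.standard_normal((1, N))
approx = np.tanh(x_test @ W1) @ W2
exact = fixed_points(x_test)
print(f"training loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The feedforward evaluation costs two matrix products regardless of the input, whereas the relaxation loop needs a number of iterations that depends on how quickly the dynamics settle, which is the time-complexity contrast the abstract draws.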
