Feedforward Approximations to Dynamic Recurrent Network Architectures.

Author information

Muir Dylan R

Affiliation

Biozentrum, University of Basel, Basel 4056, Switzerland

Publication information

Neural Comput. 2018 Feb;30(2):546-567. doi: 10.1162/neco_a_01042. Epub 2017 Nov 21.

DOI: 10.1162/neco_a_01042
PMID: 29162003
Abstract

Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input. Modern machine learning systems are designed to operate efficiently on feedforward architectures. We hypothesized that two-layer feedforward architectures with simple, deterministic dynamics could approximate the responses of single-layer recurrent network architectures. By identifying the fixed-point responses of a given recurrent network, we trained two-layer networks to directly approximate the fixed-point response to a given input. These feedforward networks then embodied useful computations, including competitive interactions, information transformations, and noise rejection. Our approach was able to find useful approximations to recurrent networks, which can then be evaluated in linear and deterministic time complexity.
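The abstract describes the core procedure: relax a single-layer recurrent network to its fixed-point response for each input, then train a two-layer feedforward network to map inputs directly to those fixed points, so that a single forward pass replaces an iterative simulation. The sketch below is a minimal illustration of that idea, not the paper's implementation; the network size, tanh transfer function, Euler relaxation schedule, hidden-layer width, and plain batch gradient descent are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout; not the author's code):
# approximate a recurrent network's fixed-point responses with a two-layer
# feedforward network.
import numpy as np

rng = np.random.default_rng(0)

# --- A small recurrent network: dx/dt = -x + W @ phi(x) + input ---
N = 20                                   # number of recurrent units
W = 0.1 * rng.standard_normal((N, N))    # recurrent weights, kept weak for stability
phi = np.tanh                            # pointwise transfer function

def fixed_point(inp, steps=2000, dt=0.1):
    """Relax the recurrent dynamics by Euler integration to an approximate fixed point."""
    x = np.zeros(N)
    for _ in range(steps):
        x += dt * (-x + W @ phi(x) + inp)
    return phi(x)

# --- Training set of (input, fixed-point response) pairs ---
inputs = rng.standard_normal((500, N))
targets = np.stack([fixed_point(u) for u in inputs])

# --- Two-layer feedforward approximator, trained by batch gradient descent ---
H = 50                                   # hidden-layer width (illustrative choice)
W1 = 0.1 * rng.standard_normal((N, H))
W2 = 0.1 * rng.standard_normal((H, N))

lr = 0.05
for epoch in range(200):
    h = np.tanh(inputs @ W1)             # hidden activations
    pred = h @ W2                        # feedforward prediction of the fixed point
    err = pred - targets
    # Backpropagate the mean squared error through the two layers
    gW2 = h.T @ err / len(inputs)
    gW1 = inputs.T @ ((err @ W2.T) * (1 - h**2)) / len(inputs)
    W2 -= lr * gW2
    W1 -= lr * gW1

# A single forward pass now stands in for the relaxed recurrent response
test = rng.standard_normal(N)
approx = np.tanh(test @ W1) @ W2
print("mean abs error vs. relaxed fixed point:", np.mean(np.abs(approx - fixed_point(test))))
```

Once trained, the feedforward map is evaluated in a fixed number of operations per input, which is the linear, deterministic evaluation cost the abstract refers to; the quality of the approximation depends on how well the training inputs cover the recurrent network's operating regime.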


Similar articles

1. Feedforward Approximations to Dynamic Recurrent Network Architectures. Neural Comput. 2018 Feb;30(2):546-567. doi: 10.1162/neco_a_01042. Epub 2017 Nov 21.
2. Noise tolerance of attractor and feedforward memory models. Neural Comput. 2012 Feb;24(2):332-90. doi: 10.1162/NECO_a_00234. Epub 2011 Nov 17.
3. Network capacity analysis for latent attractor computation. Network. 2003 May;14(2):273-302.
4. Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning. Sci Rep. 2020 Dec 17;10(1):22172. doi: 10.1038/s41598-020-79127-y.
5. Passive dendritic integration heavily affects spiking dynamics of recurrent networks. Neural Netw. 2003 Jun-Jul;16(5-6):657-63. doi: 10.1016/S0893-6080(03)00090-X.
6. Oscillatorylike behavior in feedforward neuronal networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2015 Jul;92(1):012703. doi: 10.1103/PhysRevE.92.012703. Epub 2015 Jul 6.
7. Learning in the multiple class random neural network. IEEE Trans Neural Netw. 2002;13(6):1257-67. doi: 10.1109/TNN.2002.804228.
8. Symmetries and discriminability in feedforward network architectures. IEEE Trans Neural Netw. 1993;4(5):816-26. doi: 10.1109/72.248459.
9. Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics. Adv Neural Inf Process Syst. 2019 Dec;32:15696-15705.
10. Synthesis of recurrent neural networks for dynamical system simulation. Neural Netw. 2016 Aug;80:67-78. doi: 10.1016/j.neunet.2016.04.001. Epub 2016 Apr 20.