

Recurrent networks with soft-thresholding nonlinearities for lightweight coding.

Affiliations

Department of Electrical and Systems Engineering, Washington University in St. Louis, One Brookings Drive, Campus Box 1042, MO 63130, United States; Department of Neurobiology, Harvard Medical School, 220 Longwood Ave, Boston, MA 02115, United States.

Department of Electrical and Systems Engineering, Washington University in St. Louis, One Brookings Drive, Campus Box 1042, MO 63130, United States; Division of Biology and Biomedical Sciences, Washington University in St. Louis, One Brookings Drive, Campus Box 1042, MO 63130, United States.

Publication Information

Neural Netw. 2017 Oct;94:212-219. doi: 10.1016/j.neunet.2017.07.008. Epub 2017 Jul 22.

DOI: 10.1016/j.neunet.2017.07.008
PMID: 28806715
Abstract

A long-standing and influential hypothesis in neural information processing is that early sensory networks adapt themselves to produce efficient codes of afferent inputs. Here, we show how a nonlinear recurrent network provides an optimal solution for the efficient coding of an afferent input and its history. We specifically consider the problem of producing lightweight codes, ones that minimize both ℓ1 and ℓ2 constraints on sparsity and energy, respectively. When embedded in a linear coding paradigm, this problem results in a non-smooth convex optimization problem. We employ a proximal gradient descent technique to develop the solution, showing that the optimal code is realized through a recurrent network endowed with a nonlinear soft thresholding operator. The training of the network connection weights is readily achieved through gradient-based local learning. If such learning is assumed to occur on a slower time-scale than the (faster) recurrent dynamics, then the network as a whole converges to an optimal set of codes and weights via what is, in effect, an alternating minimization procedure. Our results show how the addition of thresholding nonlinearities to a recurrent network may enable the production of lightweight, history-sensitive encoding schemes.
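The proximal gradient scheme described in the abstract is, in essence, iterative soft thresholding: each step combines a feedforward drive with recurrent feedback and passes the result through the soft-thresholding nonlinearity. A minimal NumPy sketch under illustrative assumptions (a quadratic reconstruction term with ℓ1 and ℓ2 penalties; the dictionary `Phi`, penalty weights `lam1`/`lam2`, and iteration count are invented for this example, not taken from the paper):

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the l1 norm: shrink each entry toward zero by tau.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lightweight_code(x, Phi, lam1=0.1, lam2=0.05, n_iter=200):
    """Proximal gradient descent (ISTA-style) for
       min_a 0.5*||x - Phi a||^2 + lam1*||a||_1 + 0.5*lam2*||a||^2.
    Each iteration has the form of a recurrent network update:
    feedforward drive W x, recurrent feedback S a, then soft thresholding."""
    n_features = Phi.shape[1]
    L = np.linalg.norm(Phi, 2) ** 2 + lam2      # Lipschitz constant of the smooth part
    eta = 1.0 / L                               # gradient step size
    W = eta * Phi.T                             # feedforward weights
    S = np.eye(n_features) - eta * (Phi.T @ Phi + lam2 * np.eye(n_features))
    a = np.zeros(n_features)
    for _ in range(n_iter):
        a = soft_threshold(W @ x + S @ a, eta * lam1)
    return a

# Recover a sparse code from a synthetic input.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm dictionary columns
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
x = Phi @ a_true
a_hat = lightweight_code(x, Phi)
```

The fixed-point iteration `a ← soft_threshold(W x + S a, ·)` reads directly as a recurrent network: `W` plays the role of feedforward weights, `S` the recurrent (lateral) weights, and the threshold sets the nonlinearity; this correspondence between the proximal update and the network dynamics is the structure the paper exploits.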


Similar Articles

1
Recurrent networks with soft-thresholding nonlinearities for lightweight coding.
Neural Netw. 2017 Oct;94:212-219. doi: 10.1016/j.neunet.2017.07.008. Epub 2017 Jul 22.
2
Transformed ℓ1 regularization for learning sparse deep neural networks.
Neural Netw. 2019 Nov;119:286-298. doi: 10.1016/j.neunet.2019.08.015. Epub 2019 Aug 27.
3
Selectivity and robustness of sparse coding networks.
J Vis. 2020 Nov 2;20(12):10. doi: 10.1167/jov.20.12.10.
4
Multiple Timescale Online Learning Rules for Information Maximization with Energetic Constraints.
Neural Comput. 2019 May;31(5):943-979. doi: 10.1162/neco_a_01182. Epub 2019 Mar 18.
5
Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.
IEEE Trans Neural Netw. 2011 May;22(5):714-26. doi: 10.1109/TNN.2011.2109735. Epub 2011 Mar 22.
6
Recurrent neural networks of integrate-and-fire cells simulating short-term memory and wrist movement tasks derived from continuous dynamic networks.
J Physiol Paris. 2003 Jul-Nov;97(4-6):601-12. doi: 10.1016/j.jphysparis.2004.01.017.
7
A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.
Neural Netw. 2015 Nov;71:45-54. doi: 10.1016/j.neunet.2015.07.011. Epub 2015 Aug 4.
8
Computational analysis of memory capacity in echo state networks.
Neural Netw. 2016 Nov;83:109-120. doi: 10.1016/j.neunet.2016.07.012. Epub 2016 Aug 16.
9
Learning long-term dependencies in NARX recurrent neural networks.
IEEE Trans Neural Netw. 1996;7(6):1329-38. doi: 10.1109/72.548162.
10
A framework for parallel and distributed training of neural networks.
Neural Netw. 2017 Jul;91:42-54. doi: 10.1016/j.neunet.2017.04.004. Epub 2017 Apr 19.

Cited By

1
Heterogeneous Forgetting Rates and Greedy Allocation in Slot-Based Memory Networks Promotes Signal Retention.
Neural Comput. 2024 Apr 23;36(5):1022-1040. doi: 10.1162/neco_a_01655.