
Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model.

Affiliations

Babson College, Mathematics, Analytics, Science, and Technology Division, Wellesley, MA 02481, U.S.A.

University of Notre Dame, Department of Applied and Computational Mathematics and Statistics, Notre Dame, IN 46556, U.S.A.

Publication Information

Neural Comput. 2024 Jul 19;36(8):1568-1600. doi: 10.1162/neco_a_01681.

DOI: 10.1162/neco_a_01681
PMID: 39028956
Abstract

In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. In parallel, training fixed points is a central topic in the study of deep equilibrium models in machine learning. A natural approach is to use gradient descent on the Euclidean space of weights. We show that this approach can lead to poor learning performance due in part to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We demonstrate that these learning rules avoid singularities and learn more effectively than standard gradient descent. The new learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
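The setting described in the abstract can be made concrete with a toy example. The sketch below is a minimal illustration, not the paper's method: the network size, tanh nonlinearity, and weight scaling are assumptions chosen so that forward iteration converges. It computes a fixed point r* of a recurrent network r ← tanh(Wr + x) under a static input x, and then evaluates a loss on that fixed point, which is the quantity the training rules discussed in the paper aim to minimize.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # number of units (illustrative)

# Small recurrent weights, so the map r -> tanh(W r + x) is a contraction
# and forward iteration converges to a unique fixed point.
W = 0.1 * rng.standard_normal((n, n))
x = rng.standard_normal(n)  # static input, e.g. a static visual stimulus

def fixed_point(W, x, iters=500):
    """Find r* satisfying r* = tanh(W r* + x) by repeated forward iteration."""
    r = np.zeros_like(x)
    for _ in range(iters):
        r = np.tanh(W @ r + x)
    return r

r_star = fixed_point(W, x)

# Check that r_star is (numerically) a fixed point of the network dynamics.
residual = np.linalg.norm(r_star - np.tanh(W @ r_star + x))

# A loss evaluated on the fixed point, e.g. squared error to a target
# response (here an arbitrary zero target, purely for illustration).
target = np.zeros(n)
loss = 0.5 * np.sum((r_star - target) ** 2)
```

Training would then adjust W to reduce this fixed-point loss. The paper's point is that descending the Euclidean gradient of the loss with respect to W can run into singularities in the loss surface, which motivates the reparameterized learning rules it derives.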


Similar Articles

1. Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model.
   Neural Comput. 2024 Jul 19;36(8):1568-1600. doi: 10.1162/neco_a_01681.
2. Neural learning rules for generating flexible predictions and computing the successor representation.
   Elife. 2023 Mar 16;12:e80680. doi: 10.7554/eLife.80680.
3. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
   Artif Intell Med. 2019 Jul;98:59-76. doi: 10.1016/j.artmed.2019.07.008. Epub 2019 Jul 25.
4. Learning activation rules rather than connection weights.
   Int J Neural Syst. 1996 May;7(2):129-47. doi: 10.1142/s0129065796000117.
5. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
   Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
6. Heterogeneity in Neuronal Dynamics Is Learned by Gradient Descent for Temporal Processing Tasks.
   Neural Comput. 2023 Mar 18;35(4):555-592. doi: 10.1162/neco_a_01571.
7. Local online learning in recurrent networks with random feedback.
   Elife. 2019 May 24;8:e43299. doi: 10.7554/eLife.43299.
8. HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5841-5855. doi: 10.1109/TNNLS.2021.3131356. Epub 2023 Sep 1.
9. Universality of gradient descent neural network training.
   Neural Netw. 2022 Jun;150:259-273. doi: 10.1016/j.neunet.2022.02.016. Epub 2022 Mar 2.
10. Learning smooth dendrite morphological neurons by stochastic gradient descent for pattern classification.
    Neural Netw. 2023 Nov;168:665-676. doi: 10.1016/j.neunet.2023.09.033. Epub 2023 Sep 25.