
Node perturbation learning without noiseless baseline.

Affiliations

Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba 277-8561, Japan.

Publication Information

Neural Netw. 2011 Apr;24(3):267-72. doi: 10.1016/j.neunet.2010.12.001. Epub 2010 Dec 9.

DOI: 10.1016/j.neunet.2010.12.001
PMID: 21193286
Abstract

Node perturbation learning is a stochastic gradient descent method for neural networks. It estimates the gradient by comparing the evaluation of the perturbed output with that of the unperturbed output, which we call the baseline. Node perturbation learning has primarily been investigated without taking noise on the baseline into consideration. In real biological systems, however, neural activities are intrinsically noisy, and the baseline is therefore likely to be contaminated with noise. In this paper, we propose an alternative learning method that does not require such a noiseless baseline. Our method uses a "second perturbation", which is computed with noise different from that of the first perturbation. The network weights are updated by comparing the evaluation of the outcome under the first perturbation with that under the second perturbation. We show that the learning speed decreases only linearly with the variance of the second perturbation. Moreover, using the second perturbation can lead to a smaller residual error than using the noiseless baseline.
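The update described in the abstract can be sketched on a linear student-teacher task: instead of subtracting a noiseless baseline evaluation, the weight change correlates the first perturbation with the difference between two independently perturbed evaluations. All network sizes, noise scales, and learning rates below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Sketch of node perturbation learning with a "second perturbation"
# replacing the noiseless baseline, on a linear student-teacher task.
rng = np.random.default_rng(0)

n_in, n_out = 4, 2
W_teacher = rng.standard_normal((n_out, n_in))  # target mapping (assumed task)
W = np.zeros((n_out, n_in))                     # learned weights

sigma = 0.5   # std of both perturbations (illustrative choice)
eta = 0.005   # learning rate (illustrative choice)

def loss(y, t):
    return 0.5 * np.sum((y - t) ** 2)

errors = []
for _ in range(20000):
    x = rng.standard_normal(n_in)
    t = W_teacher @ x
    y = W @ x
    xi1 = sigma * rng.standard_normal(n_out)  # first perturbation
    xi2 = sigma * rng.standard_normal(n_out)  # second perturbation (noisy "baseline")
    # Compare two perturbed evaluations; no noiseless evaluation of y is needed.
    delta_E = loss(y + xi1, t) - loss(y + xi2, t)
    # Correlating delta_E with xi1 yields an unbiased gradient estimate in
    # expectation, since xi2 is independent of xi1 and has zero mean.
    W -= eta * (delta_E / sigma**2) * np.outer(xi1, x)
    errors.append(loss(y, t))
```

In expectation the xi2 term vanishes, so the mean update follows the true gradient; the independent second perturbation only adds variance, consistent with the linear slowdown the abstract reports.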


Similar Articles

1. Node perturbation learning without noiseless baseline.
Neural Netw. 2011 Apr;24(3):267-72. doi: 10.1016/j.neunet.2010.12.001. Epub 2010 Dec 9.
2. Learning curves for stochastic gradient descent in linear feedforward networks.
Neural Comput. 2005 Dec;17(12):2699-718. doi: 10.1162/089976605774320539.
3. Optimal node perturbation in linear perceptrons with uncertain eligibility trace.
Neural Netw. 2010 Mar;23(2):219-25. doi: 10.1016/j.neunet.2009.11.013. Epub 2009 Dec 2.
4. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
5. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks.
Neural Comput. 2010 Oct;22(10):2655-77. doi: 10.1162/NECO_a_00021.
6. Propagation and control of stochastic signals through universal learning networks.
Neural Netw. 2006 May;19(4):487-99. doi: 10.1016/j.neunet.2005.10.005. Epub 2006 Jan 18.
7. An H(∞) control approach to robust learning of feedforward neural networks.
Neural Netw. 2011 Sep;24(7):759-66. doi: 10.1016/j.neunet.2011.03.015. Epub 2011 Mar 14.
8. Polynomial harmonic GMDH learning networks for time series modeling.
Neural Netw. 2003 Dec;16(10):1527-40. doi: 10.1016/S0893-6080(03)00188-6.
9. Analysis and improvement of policy gradient estimation.
Neural Netw. 2012 Feb;26:118-29. doi: 10.1016/j.neunet.2011.09.005. Epub 2011 Oct 1.
10. Improving generalization performance of natural gradient learning using optimized regularization by NIC.
Neural Comput. 2004 Feb;16(2):355-82. doi: 10.1162/089976604322742065.