
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.

Author Information

Auer Peter, Burgsteiner Harald, Maass Wolfgang

Affiliation

Chair for Information Technology, University of Leoben, Franz-Josef-Strasse 18, A-8700 Leoben, Austria.

Publication Information

Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.


DOI: 10.1016/j.neunet.2007.12.036
PMID: 18249524
Abstract

One may argue that the simplest type of neural networks beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that in contrast to the familiar model of a "multi-layer perceptron" the parallel perceptron that we consider here has just binary values as outputs of gates on the hidden layer. For a long time one has thought that there exists no competitive learning algorithm for these extremely simple neural networks, which also came to be known as committee machines. It is commonly assumed that one has to replace the hard threshold gates on the hidden layer by sigmoidal gates (or RBF-gates) and that one has to tune the weights on at least two successive layers in order to achieve satisfactory learning results for any class of neural networks that yield universal approximators. We show that this assumption is not true, by exhibiting a simple learning algorithm for parallel perceptrons: the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons such as MADALINE. Obviously these features make the p-delta rule attractive as a biologically more realistic alternative to backprop in biological neural circuits, but also for implementations in special purpose hardware. We show that the p-delta rule also implements gradient descent, with regard to a suitable error measure, although it does not require computing derivatives. Furthermore it is shown through experiments on common real-world benchmark datasets that its performance is competitive with that of other learning approaches from neural networks and machine learning. It has recently been shown [Anthony, M. (2007). On the generalization error of fixed combinations of classifiers. Journal of Computer and System Sciences 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE international joint conference on neural networks (pp. 967-972): Vol. 2] that one can also prove quite satisfactory bounds for the generalization error of this new learning rule.
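The abstract describes the parallel perceptron (a majority vote over a single hidden layer of threshold gates) and the p-delta rule only in words. The sketch below illustrates the basic idea for binary classification. It is a minimal reconstruction under simplifying assumptions, not the authors' reference implementation: the learning rate eta, the margin gamma, the stabilization strength mu, the number of perceptrons, and the function names are illustrative choices, and the squashing function the paper uses for analog outputs is omitted.

```python
import numpy as np


def train_parallel_perceptron(X, y, n_perceptrons=15, eta=0.01, gamma=0.1,
                              mu=1.0, epochs=50, seed=0):
    """Sketch of the p-delta rule for binary classification.

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    Hyperparameter names and defaults are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    # One weight vector per perceptron; this hidden layer is the only
    # layer of trainable weights.
    W = rng.normal(size=(n_perceptrons, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    for _ in range(epochs):
        for x, target in zip(X, y):
            acts = W @ x                   # pre-activations w_i . x
            votes = np.sign(acts)          # binary hidden-layer outputs
            pred = np.sign(votes.sum())    # majority vote = network output
            if pred != target:
                # Misclassified: every perceptron that voted with the wrong
                # majority is pushed one step towards the target side.
                wrong = votes != target
                W[wrong] += eta * target * x
            else:
                # Correct: margin stabilization. Perceptrons whose activation
                # lies within gamma of the threshold are moved further away
                # from it, making the majority vote more robust.
                small = np.abs(acts) < gamma
                W[small] += eta * mu * votes[small, None] * x
            # The paper keeps the weight vectors normalized.
            W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W


def predict(W, X):
    """Majority vote of the trained perceptron array (+1 / -1)."""
    return np.sign(np.sign(X @ W.T).sum(axis=1))
```

With an odd number of perceptrons the majority vote is never tied, and a trained array is evaluated with predict(W, X); note how no derivatives are computed and only one layer of weights is ever updated, which is the property the abstract emphasizes.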


Similar Articles

[1]
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.

Neural Netw. 2008-6

[2]
On the classification capability of sign-constrained perceptrons.

Neural Comput. 2008-1

[3]
A new backpropagation learning algorithm for layered neural networks with nondifferentiable units.

Neural Comput. 2007-5

[4]
Direct parallel perceptrons (DPPs): fast analytical calculation of the parallel perceptrons weights with margin control for classification tasks.

IEEE Trans Neural Netw. 2011-11

[5]
A forecast-based STDP rule suitable for neuromorphic implementation.

Neural Netw. 2012-2-14

[6]
Minimization of error functionals over perceptron networks.

Neural Comput. 2008-1

[7]
Comparison of universal approximators incorporating partial monotonicity by structure.

Neural Netw. 2009-9-17

[8]
On the computational power of threshold circuits with sparse activity.

Neural Comput. 2006-12

[9]
An integral upper bound for neural network approximation.

Neural Comput. 2009-10

[10]
Bounds on the number of hidden neurons in three-layer binary neural networks.

Neural Netw. 2003-9

Cited By

[1]
High-rate leading spikes in propagating spike sequences predict seizure outcome in surgical patients with temporal lobe epilepsy.

Brain Commun. 2023-10-24

[2]
A Cascade BP Neural Network Tuned PID Controller for a High-Voltage Cable-Stripping Robot.

Micromachines (Basel). 2023-3-20

[3]
Machine learning application in personalised lung cancer recurrence and survivability prediction.

Comput Struct Biotechnol J. 2022-4-4

[4]
A Complex-Valued Oscillatory Neural Network for Storage and Retrieval of Multidimensional Aperiodic Signals.

Front Comput Neurosci. 2021-5-24

[5]
Surrogate models based on machine learning methods for parameter estimation of left ventricular myocardium.

R Soc Open Sci. 2021-1-13

[6]
Bio-Inspired Evolutionary Model of Spiking Neural Networks in Ionic Liquid Space.

Front Neurosci. 2019-11-8

[7]
Prediction of the Tensile Response of Carbon Black Filled Rubber Blends by Artificial Neural Network.

Polymers (Basel). 2018-6-9

[8]
Modeling the Temperature Dependence of Dynamic Mechanical Properties and Visco-Elastic Behavior of Thermoplastic Polyurethane Using Artificial Neural Network.

Polymers (Basel). 2017-10-18

[9]
An Oscillatory Neural Autoencoder Based on Frequency Modulation and Multiplexing.

Front Comput Neurosci. 2018-7-10

[10]
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

Neural Comput. 2018-6
