Universal approximation using incremental constructive feedforward networks with random hidden nodes.

Author information

Huang Guang-Bin, Chen Lei, Siew Chee-Kheong

Publication information

IEEE Trans Neural Netw. 2006 Jul;17(4):879-892. doi: 10.1109/TNN.2006.875977.

Abstract

According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters of a network can make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions, such as threshold networks. Departing from conventional neural network theories, this paper proves, by an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose the hidden nodes at random and then adjust only the output weights linking the hidden layer to the output layer. In such SLFN implementations, the activation function for additive nodes can be any bounded nonconstant piecewise continuous function g: R → R, and the activation function for RBF nodes can be any integrable piecewise continuous function g: R → R with ∫_R g(x) dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared with other popular methods, such a network is fully automatic: users need not intervene in the learning process by manually tuning control parameters.
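The incremental construction the abstract describes is straightforward to sketch in code. Below is a minimal, illustrative Python version for additive sigmoid nodes on a regression task. It is a sketch of the general idea rather than the paper's exact algorithm, and the function and parameter names (incremental_random_slfn, max_nodes, tol) are invented for this example: each new hidden node is generated at random, and its output weight is the one-dimensional least-squares coefficient that best cancels the current residual error; earlier nodes are never retuned.

```python
import numpy as np

def incremental_random_slfn(X, y, max_nodes=50, tol=1e-3, seed=None):
    """Grow an SLFN one random additive (sigmoid) hidden node at a time.

    Hidden-node parameters (input weights a, bias b) are drawn at random
    and never tuned; only the output weight beta of each new node is set,
    by a one-dimensional least-squares fit of the current residual error.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    residual = y.astype(float).copy()   # e_0 = the target to approximate
    nodes = []                          # (a, b, beta) for each hidden node
    for _ in range(max_nodes):
        a = rng.uniform(-1.0, 1.0, size=n_features)  # random input weights
        b = rng.uniform(-1.0, 1.0)                   # random bias
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))       # node output g(a·x + b)
        beta = (residual @ h) / (h @ h)              # beta = <e, h> / <h, h>
        residual = residual - beta * h               # shrink the residual
        nodes.append((a, b, beta))
        if np.linalg.norm(residual) < tol:           # stop once close enough
            break
    return nodes

def predict(nodes, X):
    """Sum the weighted outputs of all frozen hidden nodes."""
    return sum(beta * (1.0 / (1.0 + np.exp(-(X @ a + b))))
               for a, b, beta in nodes)
```

Because each node is fitted once in closed form and then frozen, training reduces to a sequence of scalar updates: there is no learning rate and no gradient, and the activation function only needs to be computable, not differentiable, which is what makes such a scheme fully automatic.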
