Universal approximation of extreme learning machine with adaptive growth of hidden nodes.

Publication information

IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):365-71. doi: 10.1109/TNNLS.2011.2178124.

Abstract

Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. In this brief, we propose an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach to the automated design of networks. Unlike other incremental ELMs (I-ELMs), in which existing hidden nodes are frozen as new hidden nodes are added one by one, AG-ELM determines the number of hidden nodes adaptively: an existing network may be replaced by a newly generated network that has fewer hidden nodes and better generalization performance. We then prove that such an AG-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results demonstrate that this new approach achieves a more compact network architecture than the I-ELM.
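The abstract gives only the high-level idea — random (frozen) hidden nodes with analytically computed output weights, plus an adaptive-growth step in which a candidate network replaces the current one only when it generalizes at least as well with fewer nodes. The paper's exact growth criterion is not stated here, so the following is a minimal sketch under those assumptions; the function names (`elm_fit`, `ag_elm`), the tanh activation, the validation-MSE criterion, and the random candidate sizes are all illustrative choices, not the authors' algorithm.

```python
import numpy as np

def elm_fit(X, y, n_hidden, rng):
    """Basic ELM: random hidden-layer parameters, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (frozen)
    b = rng.normal(size=n_hidden)                 # random biases (frozen)
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                  # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def ag_elm(X_train, y_train, X_val, y_val, max_nodes=50, trials=20, seed=0):
    """Adaptive-growth sketch: generate candidate networks of varying size and
    let a candidate replace the incumbent only if its validation error is
    strictly lower, or equal with fewer hidden nodes."""
    rng = np.random.default_rng(seed)
    best_model, best_err, best_n = None, np.inf, None
    for _ in range(trials):
        n = int(rng.integers(1, max_nodes + 1))   # candidate network size
        model = elm_fit(X_train, y_train, n, rng)
        err = np.mean((elm_predict(model, X_val) - y_val) ** 2)
        if err < best_err or (np.isclose(err, best_err) and n < best_n):
            best_model, best_err, best_n = model, err, n
    return best_model, best_n, best_err
```

For example, approximating `sin(x)` on a compact interval with this sketch typically settles on a network far smaller than `max_nodes`, which mirrors the abstract's claim that adaptive replacement yields a more compact architecture than always-growing I-ELM.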

