IEEE Trans Cybern. 2013 Dec;43(6):2054-65. doi: 10.1109/TCYB.2013.2239987.
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. Determining a suitable network architecture is recognized as crucial to the successful application of ELMs. This paper first proposes a dynamic ELM (D-ELM) in which hidden nodes can be recruited or deleted dynamically according to their significance to network performance, so that not only the parameters but also the architecture can be adapted simultaneously. The paper then proves that such a D-ELM, using Lebesgue p-integrable hidden activation functions, can approximate any Lebesgue p-integrable function on a compact input set. Simulation results over various test problems demonstrate that the proposed D-ELM substantially reduces network size while preserving good generalization performance.
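To make the recruit-and-delete idea concrete, the sketch below implements a minimal dynamic ELM in Python with NumPy. It is an illustration under assumptions, not the paper's algorithm: the significance measure (the magnitude of each node's output weights), the growth schedule (one node at a time), and all thresholds (tol, prune_eps, max_nodes) are hypothetical stand-ins for the criteria derived in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DynamicELM:
    """Minimal dynamic-ELM sketch: grow hidden nodes while training
    error stays high, then prune nodes with insignificant output
    weights. The significance measure used here (output-weight norm)
    is a stand-in for the paper's criterion, not the D-ELM rule."""

    def __init__(self, n_init=10, max_nodes=100, tol=1e-2,
                 prune_eps=1e-3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_init, self.max_nodes = n_init, max_nodes
        self.tol, self.prune_eps = tol, prune_eps

    def _hidden(self, X):
        # Random-feature hidden layer: input weights are never retrained.
        return sigmoid(X @ self.W + self.b)

    def _solve(self, X, T):
        # Output weights via least squares (Moore-Penrose pseudoinverse),
        # as in standard ELM training; returns training RMSE.
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T
        return np.sqrt(np.mean((H @ self.beta - T) ** 2))

    def fit(self, X, T):
        T = T.reshape(len(T), -1)
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_init))
        self.b = self.rng.normal(size=self.n_init)
        err = self._solve(X, T)
        # Recruit: add random hidden nodes while error exceeds tolerance.
        while err > self.tol and self.W.shape[1] < self.max_nodes:
            self.W = np.hstack([self.W, self.rng.normal(size=(d, 1))])
            self.b = np.append(self.b, self.rng.normal())
            err = self._solve(X, T)
        # Delete: drop nodes whose output weights contribute little.
        keep = np.linalg.norm(self.beta, axis=1) > self.prune_eps
        self.W, self.b = self.W[:, keep], self.b[keep]
        self._solve(X, T)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Usage: fit a 1-D regression target and report the final network size.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = np.sin(3.0 * X).ravel()
model = DynamicELM(seed=42).fit(X, T)
print("hidden nodes:", model.W.shape[1])
```

The sketch refits the output weights from scratch after every architectural change; the paper's D-ELM is more refined, so this only conveys the idea that the architecture, not just the parameters, adapts during training.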