IEEE Trans Cybern. 2015 Feb;45(2):279-88. doi: 10.1109/TCYB.2014.2325594. Epub 2014 Jun 5.
Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. ELMs have proved to be very fast and effective, especially for solving function approximation problems with a predetermined network structure. However, the resulting network may contain insignificant hidden nodes. In this paper, we propose the dynamic adjustment ELM (DA-ELM), which further tunes the input parameters of insignificant hidden nodes in order to reduce the residual error. It is proved that the energy error can be effectively reduced by applying a recursive expectation-minimization theorem. In DA-ELM, the input parameters of insignificant hidden nodes are updated in the direction of decreasing energy error at each step. The detailed theoretical foundation of DA-ELM is presented in this paper. Experimental results show that the proposed DA-ELM is more efficient than state-of-the-art algorithms such as the Bayesian ELM, optimally pruned ELM, two-stage ELM, Levenberg-Marquardt, and the sensitivity-based linear learning method, as well as the original ELM.
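The idea can be illustrated with a minimal sketch: a standard ELM assigns random input weights, solves the output weights by least squares, and the adjustment step then refines the input parameters of a weak hidden node. This is not the paper's DA-ELM algorithm — the node-selection rule (smallest output-weight magnitude) and the plain gradient update below are illustrative assumptions, and a best-so-far check stands in for the paper's proven error decrease.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression task: approximate y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

L = 20                        # number of hidden nodes
W = rng.normal(size=(1, L))   # random input weights (standard ELM)
b = rng.normal(size=L)        # random biases

def fit_beta(W, b):
    """Solve output weights by least squares and return the mean squared error."""
    H = sigmoid(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y      # Moore-Penrose solution for output weights
    err = np.mean((H @ beta - y) ** 2)
    return beta, err

beta, err0 = fit_beta(W, b)

# Assumed selection rule: the node with the smallest |output weight|
# is treated as "insignificant" (a stand-in for the paper's criterion).
j = int(np.argmin(np.abs(beta)))

# Illustrative dynamic adjustment: nudge that node's input parameters
# along the negative gradient of the squared error, then refit beta.
lr = 0.05
best_err = err0
for _ in range(50):
    H = sigmoid(X @ W + b)
    r = H @ beta - y                  # residual
    s = H[:, j] * (1.0 - H[:, j])     # sigmoid derivative for node j
    W[0, j] -= lr * 2.0 * np.mean(r * beta[j] * s * X[:, 0])
    b[j]   -= lr * 2.0 * np.mean(r * beta[j] * s)
    beta, err = fit_beta(W, b)
    best_err = min(best_err, err)     # keep the best fit seen so far
```

Because the output weights are refit in closed form after every update, each iteration costs one pseudo-inverse; the DA-ELM paper's contribution is precisely a recursive scheme that avoids recomputing this from scratch while guaranteeing the energy error decreases.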