IEEE Trans Neural Netw Learn Syst. 2012 Sep;23(9):1498-505. doi: 10.1109/TNNLS.2012.2202289.
It is clear that the learning speed of neural networks is in general far slower than required, which has been a major bottleneck for many applications. Recently, a simple and efficient learning method, referred to as the extreme learning machine (ELM), was proposed by Huang, which has shown that, compared with some conventional methods, the training time of neural networks can be reduced by a thousand times. However, one of the open problems in ELM research is whether the number of hidden nodes can be further reduced without affecting learning effectiveness. This brief proposes a new learning algorithm, called the bidirectional extreme learning machine (B-ELM), in which some hidden nodes are not randomly selected. In theory, this algorithm tends to reduce network output error to 0 at an extremely early learning stage. Furthermore, we find a relationship between the network output error and the network output weights in the proposed B-ELM. Simulation results demonstrate that the proposed method can be tens to hundreds of times faster than other incremental ELM algorithms.
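For context, the basic ELM that the abstract refers to trains a single-hidden-layer network by assigning the input weights and biases randomly and solving only the output weights by least squares. The following is a minimal sketch of that idea, not the B-ELM algorithm of this brief; the sigmoid activation, hidden-layer size, and the toy sine-regression target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Basic ELM: random hidden parameters, least-squares output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)     # solve H @ beta ~= T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy 1-D regression: approximate sin(x) on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
T = np.sin(X).ravel()
W, b, beta = elm_train(X, T, n_hidden=50)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

Because only `beta` is fitted (a single linear solve), training is very fast; incremental ELM variants, which B-ELM is compared against, add hidden nodes one or more at a time instead of fixing their number in advance.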