Huynh Hieu Trung, Won Yonggwan, Kim Jung-Ja
Department of Computer Engineering, Chonnam National University, Gwangju, Korea.
Int J Neural Syst. 2008 Oct;18(5):433-41. doi: 10.1142/S0129065708001695.
Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It is much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically once the input weights and hidden-layer biases have been chosen at random. However, this algorithm often requires a large number of hidden units and therefore responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden-layer biases. However, E-ELM requires considerable time to search for optimal parameters through its iterative process and is not well suited to data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of the hidden units are determined by a fast regularized least-squares scheme. Experimental results on many real applications with both small and large numbers of input features show that the proposed approach achieves good generalization performance with much more compact networks and extremely high speed for both learning and testing.
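For reference, a minimal sketch of the baseline ELM training step described above, assuming the textbook formulation (random input weights and biases, sigmoid hidden layer, output weights solved via the Moore-Penrose pseudoinverse). This illustrates the standard ELM that the paper improves upon, not the regularized least-squares scheme proposed here; all function and variable names are illustrative.

```python
import numpy as np

def train_elm(X, T, n_hidden, seed=None):
    """Textbook ELM training for an SLFN (a sketch, not this paper's method).

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and hidden-layer biases are chosen at random...
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H with sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # ...and the output weights are determined analytically:
    # beta = pinv(H) @ T, the minimum-norm least-squares solution.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the only fitted parameters are solved in one linear-algebra step, training cost is dominated by the pseudoinverse of H; the drawback noted in the abstract is that random hidden units must be numerous to fit well, which slows prediction on new observations.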