Lai Xiaoping, Cao Jiuwen, Huang Xiaofeng, Wang Tianlei, Lin Zhiping
IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):1899-1913. doi: 10.1109/TNNLS.2019.2927385. Epub 2019 Aug 6.
One of the salient features of the extreme learning machine (ELM) is its fast learning speed. However, in a big data environment, the ELM still suffers from an overly heavy computational load due to the high dimensionality and the large amount of data. Using the alternating direction method of multipliers (ADMM), a convex model-fitting problem can be split into a set of concurrently executable subproblems, each involving only a subset of the model coefficients. By maximally splitting across the coefficients and incorporating a novel relaxation technique, a maximally split and relaxed ADMM (MS-RADMM), along with a scalarwise implementation, is developed for the regularized ELM (RELM). The convergence conditions and the convergence rate of the MS-RADMM are established; the method exhibits linear convergence with a smaller convergence ratio than the unrelaxed maximally split ADMM. The optimal parameter values of the MS-RADMM are derived, and a fast parameter selection scheme is provided. Experiments are conducted on ten benchmark classification data sets, the results of which demonstrate the fast convergence and parallelism of the MS-RADMM. For performance evaluation, the MS-RADMM is compared with the matrix-inversion-based method in terms of the numbers of multiplication and addition operations, the computation time, and the number of memory cells.
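The coefficient-wise splitting described above can be illustrated with a minimal sketch: a plain sharing-form ADMM applied to ridge regression, which is the form of the RELM output-weight problem (A standing in for the hidden-layer output matrix, b for the targets). Each scalar coefficient x_i gets its own closed-form subproblem, so all n updates per sweep are independent and could run concurrently. This sketch omits the paper's relaxation step, convergence analysis, and parameter selection, which are the contributions of the MS-RADMM itself; the function name, parameter values, and problem sizes below are illustrative, not from the paper.

```python
import numpy as np

def ridge_admm_maxsplit(A, b, lam=0.5, rho=1.0, iters=4000):
    """Coefficient-wise (maximally split) sharing ADMM for ridge regression:
        min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2.
    Each scalar x_i solves its own one-dimensional subproblem in closed
    form, so the n coefficient updates per sweep are mutually independent."""
    m, n = A.shape
    x = np.zeros(n)
    zbar = np.zeros(m)                    # average of the per-coefficient shares a_i * x_i
    u = np.zeros(m)                       # scaled dual variable
    col_sq = np.einsum('ij,ij->j', A, A)  # ||a_i||^2 for every column, precomputed
    for _ in range(iters):
        Ax_bar = (A @ x) / n              # current average share
        w = zbar - Ax_bar - u             # shared correction term, same for all i
        # Scalar x_i update: argmin_x 0.5*lam*x^2
        #                  + 0.5*rho*||a_i x - (a_i x_i + w)||^2  (vectorized over i)
        x = rho * (col_sq * x + A.T @ w) / (lam + rho * col_sq)
        Ax_bar = (A @ x) / n
        # z-bar update: closed form for the least-squares loss
        zbar = (b + rho * (Ax_bar + u)) / (n + rho)
        # dual ascent on the averaged consensus constraint
        u = u + Ax_bar - zbar
    return x

# Agreement check against the matrix-inversion-based ridge solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
lam = 0.5
x_admm = ridge_admm_maxsplit(A, b, lam=lam)
x_ref = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)
```

At a fixed point the dual update forces the average share to equal zbar, and substituting back recovers the normal equations (A^T A + lam*I) x = A^T b, so the split iteration and the direct matrix inversion agree; the point of the splitting is that the per-coefficient step costs only vector operations per coefficient and parallelizes, whereas the inversion costs O(n^3).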