Zhao Yong-Ping
Jiangsu Province Key Laboratory of Aerospace Power Systems, College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China.
Neural Netw. 2016 Aug;80:95-109. doi: 10.1016/j.neunet.2016.04.009. Epub 2016 May 2.
Recently, the extreme learning machine (ELM) has become a popular topic in the machine learning community. By replacing the so-called ELM feature mappings with the nonlinear mappings induced by kernel functions, two kernel ELMs, i.e., P-KELM and D-KELM, are obtained from the primal and dual perspectives, respectively. Unfortunately, both P-KELM and D-KELM possess dense solutions whose size grows in direct proportion to the number of training data. To remedy this, a constructive algorithm for P-KELM (CCP-KELM) is first proposed by virtue of Cholesky factorization, in which the training data incurring the largest reductions in the objective function are recruited as significant vectors. To reduce its training cost further, PCCP-KELM is then obtained by applying a probabilistic speedup scheme to CCP-KELM. As a counterpart to CCP-KELM, a destructive P-KELM (CDP-KELM) is presented using a partial Cholesky factorization strategy, in which the training data incurring the smallest reductions in the objective function after their removal are pruned from the current set of significant vectors. Finally, to verify the efficacy and feasibility of the proposed algorithms, experiments are conducted on both small and large benchmark data sets.
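The constructive idea above can be illustrated with a minimal greedy forward-selection sketch for kernel ridge regression. This is not the paper's CCP-KELM (which avoids re-solving at every step via incremental Cholesky factorization updates); here each candidate system is simply re-solved from scratch, and the kernel, regularization parameter `C`, and RBF width `gamma` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_forward_kelm(X, y, n_sv=10, C=10.0, gamma=1.0):
    """Greedy constructive sketch: at each step, recruit the training
    point whose inclusion as a 'significant vector' most reduces the
    regularized squared-error objective. Illustrative only; CCP-KELM
    achieves this efficiently with Cholesky rank-one updates."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)          # full kernel, columns = candidates
    selected = []
    for _ in range(n_sv):
        best_j, best_obj = None, np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            Ks = K[:, idx]               # n x m design matrix over chosen vectors
            A = Ks.T @ Ks + np.eye(len(idx)) / C
            beta = np.linalg.solve(A, Ks.T @ y)
            resid = y - Ks @ beta
            obj = resid @ resid + beta @ beta / C
            if obj < best_obj:
                best_obj, best_j = obj, j
        selected.append(best_j)
    # refit on the final set of significant vectors
    Ks = K[:, selected]
    A = Ks.T @ Ks + np.eye(len(selected)) / C
    beta = np.linalg.solve(A, Ks.T @ y)
    return np.array(selected), beta
```

The destructive counterpart (CDP-KELM) runs the loop in reverse: start from a full set and repeatedly drop the vector whose removal changes the objective least.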