Sun Ping, Yao Xin
Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), University of Birmingham, Edgbaston, Birmingham B15 2TT, UK.
IEEE Trans Neural Netw. 2010 Jun;21(6):883-94. doi: 10.1109/TNN.2010.2044244. Epub 2010 Apr 19.
Recently, sparse approximation has become a preferred method for learning large-scale kernel machines. This technique represents the solution with only a subset of the original data points, known as basis vectors, which are usually chosen one at a time by a forward selection procedure driven by some selection criterion. The computational complexity of several resulting algorithms scales as O(NM^2) in time and O(NM) in memory, where N is the number of training points and M is the number of basis vectors, which also equals the number of forward-selection steps. For some large-scale data sets, obtaining a better solution requires including more basis vectors, so M is far from trivial; at the same time, limited computational resources (e.g., memory) prevent us from including too many. To resolve this dilemma, we propose adding an ensemble of basis vectors, rather than a single one, at each forward step. The proposed method, closely related to gradient boosting, can decrease the required number of forward steps M significantly and thus save a large fraction of the computational cost. Numerical experiments on three large-scale regression tasks and a classification problem demonstrate the effectiveness of the proposed approach.
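To make the batch forward-selection idea concrete, the following is a minimal Python sketch, not the authors' implementation: at each forward step it picks a batch of basis vectors (instead of one) using an illustrative residual-correlation criterion, then refits the sparse kernel expansion by regularised least squares. All names (rbf_kernel, sparse_kernel_fit, batch_size, the selection criterion, and the toy data) are assumptions for illustration only.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_kernel_fit(X, y, n_steps=5, batch_size=10, lam=1e-3, gamma=1.0):
    # Greedy forward selection: at each step, choose the `batch_size` training
    # points whose kernel columns correlate most with the current residual,
    # then refit the weights over all selected basis vectors (ridge solve).
    selected = []              # indices of chosen basis vectors
    residual = y.copy()        # current residual r = y - K_b @ w
    w = None
    for _ in range(n_steps):
        # Illustrative selection criterion: |K^T r| for every candidate column.
        # The full N x N kernel is formed here only for clarity; a practical
        # code would evaluate candidate columns lazily to respect O(NM) memory.
        K_all = rbf_kernel(X, X, gamma)
        scores = np.abs(K_all.T @ residual)
        scores[selected] = -np.inf                 # do not reselect
        batch = np.argsort(scores)[-batch_size:]   # top-scoring batch
        selected.extend(batch.tolist())
        # Refit weights over all M selected basis vectors: regularised LS.
        K_b = rbf_kernel(X, X[selected], gamma)    # N x M
        A = K_b.T @ K_b + lam * np.eye(len(selected))
        w = np.linalg.solve(A, K_b.T @ y)
        residual = y - K_b @ w
    return np.asarray(selected), w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
    idx, w = sparse_kernel_fit(X, y)
    pred = rbf_kernel(X, X[idx]) @ w
    print("basis vectors:", len(idx), "RMSE:", np.sqrt(np.mean((y - pred) ** 2)))

With batch_size basis vectors added per step, the number of forward steps (and hence the number of expensive refits) drops by roughly a factor of batch_size compared with one-at-a-time selection, which is the cost saving the abstract describes.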