School of Statistics, University of International Business and Economics, Beijing 100029, P.R.C.
Neural Comput. 2014 Jan;26(1):158-84. doi: 10.1162/NECO_a_00535. Epub 2013 Oct 8.
We consider a class of kernel-based regression with general convex loss functions in a regularization scheme. The kernels used in the scheme are not necessarily symmetric and hence not necessarily positive semidefinite; the ℓ¹-norm of the coefficients in the kernel ensembles is taken as the regularizer. The setting in this letter is quite different from that of classical regularized regression algorithms such as regularized networks and support vector machine regression. Based on an established error decomposition consisting of approximation error, hypothesis error, and sample error, we present a detailed mathematical analysis of this scheme and, in particular, of its learning rate. A reweighted empirical process theory is applied to the analysis of the resulting learning algorithms and plays a key role in deriving explicit learning rates under certain assumptions.
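As a concrete illustration (a sketch in assumed notation, not the paper's own), the coefficient-based ℓ¹ regularization scheme described above can be written as follows, where z = {(x_i, y_i)}_{i=1}^m denotes the sample, φ a general convex loss, K a possibly asymmetric kernel, and λ > 0 a regularization parameter:

% A plausible formulation of the coefficient-based scheme (notation assumed):
% the estimator is a kernel ensemble whose coefficient vector minimizes the
% empirical risk plus the \ell^1 norm of the coefficients.
\[
  f_z = \sum_{i=1}^{m} \alpha_i^{z} \, K(x_i, \cdot),
  \qquad
  \alpha^{z} = \operatorname*{arg\,min}_{\alpha \in \mathbb{R}^m}
  \left\{
    \frac{1}{m} \sum_{i=1}^{m}
      \phi\!\Bigl(y_i, \sum_{j=1}^{m} \alpha_j K(x_j, x_i)\Bigr)
    + \lambda \sum_{j=1}^{m} \lvert \alpha_j \rvert
  \right\}.
\]

Because the hypothesis space in such a scheme depends on the sample itself, the usual approximation/sample error split is, in the coefficient-regularization literature, typically augmented by a hypothesis error term, which is consistent with the three-part decomposition mentioned in the abstract.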