
Learning with convex loss and indefinite kernels.

Affiliation

School of Statistics, University of International Business and Economics, Beijing 100029, P.R.C.

Publication Information

Neural Comput. 2014 Jan;26(1):158-84. doi: 10.1162/NECO_a_00535. Epub 2013 Oct 8.

Abstract

We consider a kind of kernel-based regression with general convex loss functions in a regularization scheme. The kernels used in the scheme are not necessarily symmetric, and hence not necessarily positive semidefinite; the l1-norm of the coefficients in the kernel ensembles is taken as the regularizer. Our setting in this letter is quite different from classical regularized regression algorithms such as regularized networks and support vector machine regression. Under an established error decomposition that consists of approximation error, hypothesis error, and sample error, we present a detailed mathematical analysis of this scheme and, in particular, of its learning rate. A reweighted empirical process theory is applied to the analysis of the resulting learning algorithms and plays a key role in deriving the explicit learning rate under some assumptions.
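The scheme described in the abstract can be illustrated with a minimal sketch: coefficient-based regression over a kernel ensemble with an l1-norm regularizer, where the kernel need not be symmetric (so its Gram matrix need not be positive semidefinite). The sketch below is not the paper's algorithm; it assumes the squared loss, a hypothetical asymmetric shifted-Gaussian kernel, toy data, and a generic ISTA (proximal gradient) solver purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only, not from the paper).
n = 60
X = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(np.pi * X) + 0.1 * rng.normal(size=n)

def indefinite_kernel(s, t, h=0.3):
    # Asymmetric kernel (hypothetical): a Gaussian shifted off the diagonal,
    # so K(s, t) != K(t, s) and the Gram matrix is not PSD in general.
    return np.exp(-((s - t - 0.1) ** 2) / (2 * h ** 2))

K = indefinite_kernel(X[:, None], X[None, :])  # n x n, not symmetric

lam = 0.05  # l1 regularization strength (hypothetical value)
# Step size = 1 / Lipschitz constant of the smooth part (1/2n)||K a - y||^2.
step = n / (np.linalg.norm(K, 2) ** 2)

alpha = np.zeros(n)
for _ in range(500):
    resid = K @ alpha - y
    grad = K.T @ resid / n                 # gradient of the squared-loss term
    z = alpha - step * grad
    # Soft-thresholding: the proximal operator of the l1 regularizer.
    alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

train_mse = np.mean((K @ alpha - y) ** 2)
print(f"train MSE: {train_mse:.4f}, nonzero coefficients: {np.sum(alpha != 0)}")
```

The l1 penalty drives many coefficients exactly to zero, so the learned function is supported on a sparse subset of the kernel ensemble; because only the coefficient vector is penalized, no symmetry or positive semidefiniteness of K is required anywhere in the computation.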

