

Analysis of fixed-point and coordinate descent algorithms for regularized kernel methods.

Author

Dinuzzo Francesco

Affiliation

Max Planck Institute for Intelligent Systems, Tübingen 72076, Germany.

Publication

IEEE Trans Neural Netw. 2011 Oct;22(10):1576-87. doi: 10.1109/TNN.2011.2164096. Epub 2011 Aug 18.

Abstract

In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well-suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent, and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. The two methodologies are both very easy to implement. In this paper, we also show how to remove non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.
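The paper treats general convex losses; the two ideas can be illustrated in the special case of the squared loss (kernel ridge regression), where the stationarity condition reduces to the linear system (K + λI)c = y. The sketch below is not the paper's algorithm: it shows a plain fixed-point map (which contracts only when λ exceeds the spectral norm of K) and exact coordinate minimization with closed-form line searches; the kernel, `lam`, and iteration counts are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def fixed_point_krr(K, y, lam, n_iter=200):
    # Fixed-point iteration c <- (y - K c) / lam, whose fixed point
    # solves (K + lam I) c = y. Contracts when lam > ||K||_2.
    c = np.zeros_like(y)
    for _ in range(n_iter):
        c = (y - K @ c) / lam
    return c

def coord_descent_krr(K, y, lam, n_sweeps=50):
    # Cyclic coordinate descent on the SPD quadratic
    #   J(c) = 0.5 c^T (K + lam I) c - y^T c,
    # where each one-dimensional line search has a closed-form minimizer.
    A = K + lam * np.eye(len(y))
    c = np.zeros_like(y)
    for _ in range(n_sweeps):
        for j in range(len(y)):
            # Residual of equation j with c[j] excluded; exact minimizer in c[j].
            r = y[j] - A[j] @ c + A[j, j] * c[j]
            c[j] = r / A[j, j]
    return c
```

For a strictly positive definite system both routines agree with a direct solve, e.g. `np.linalg.solve(K + lam * np.eye(n), y)`; with a non-quadratic convex loss the fixed-point map in the paper is applied to the loss gradient instead, and coordinate descent exploits additive separability of the loss to keep the line searches closed form.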

