Francesco Dinuzzo
Max Planck Institute for Intelligent Systems, Tübingen 72076, Germany.
IEEE Trans Neural Netw. 2011 Oct;22(10):1576-87. doi: 10.1109/TNN.2011.2164096. Epub 2011 Aug 18.
In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss functions and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well suited to parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent and generalizes techniques previously proposed for linear support vector machines; it exploits the structure of additively separable loss functions to compute the solutions of line searches in closed form. Both methodologies are very easy to implement. We also show how to remove the non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.
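To make the fixed-point idea concrete, the following is a minimal NumPy sketch of a damped fixed-point iteration for a kernel machine, assuming a squared loss and a Gaussian kernel purely for illustration; the helper names (gaussian_kernel, fixed_point_kernel_machine), the damping rule, and the stopping criterion are assumptions of this sketch and not the paper's exact algorithm, which handles general convex losses and comes with its own convergence analysis.

```python
import numpy as np

def gaussian_kernel(X, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix (illustrative choice of kernel).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def fixed_point_kernel_machine(K, y, lam=1.0, eta=None, n_iter=2000, tol=1e-8):
    """Damped fixed-point iteration (illustrative sketch) for
           min_c  sum_i L(y_i, (K c)_i) + (lam / 2) * c^T K c,
       here with squared loss L(y, f) = (y - f)^2 / 2, whose gradient in f is f - y.
       The stationarity condition c = -(1/lam) * dL/df(K c) is applied as a relaxed
       fixed-point update; each step is a single matrix-vector product, so all
       coefficients are updated simultaneously, which makes the scheme easy to
       parallelize."""
    n = K.shape[0]
    if eta is None:
        # Conservative damping that guarantees contraction for the squared loss.
        eta = lam / (lam + np.linalg.norm(K, 2))
    c = np.zeros(n)
    for _ in range(n_iter):
        f = K @ c                     # current predictions on the training inputs
        grad = f - y                  # loss gradient w.r.t. predictions (squared loss)
        c_new = (1.0 - eta) * c - eta * grad / lam
        if np.linalg.norm(c_new - c) < tol:
            return c_new
        c = c_new
    return c

# Toy usage: for the squared loss the fixed point is the kernel ridge regression solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
K = gaussian_kernel(X, gamma=0.5)
c = fixed_point_kernel_machine(K, y, lam=1.0)
print("max deviation from closed form:",
      np.max(np.abs(c - np.linalg.solve(K + 1.0 * np.eye(50), y))))
```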
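The coordinate descent idea can be illustrated on the familiar dual of the hinge-loss kernel SVM, where the loss is additively separable and each one-variable subproblem is a box-constrained quadratic solvable in closed form. The sketch below follows the standard dual coordinate descent scheme known from linear SVM solvers, written here in kernel form; the function name, parameters, and toy data are assumptions of this sketch, while the paper's methodology covers other separable convex losses as well.

```python
import numpy as np

def dual_coordinate_descent_svm(K, y, C=1.0, n_epochs=100, tol=1e-6):
    """Coordinate descent (illustrative sketch) on the dual of the hinge-loss kernel SVM,
           min_a  (1/2) a^T Q a - sum_i a_i    s.t.  0 <= a_i <= C,
       with Q_ij = y_i y_j K_ij.  Each one-variable subproblem is a box-constrained
       quadratic, so the exact line search is available in closed form: a Newton step
       on coordinate i clipped back to the interval [0, C]."""
    n = K.shape[0]
    Q = (y[:, None] * y[None, :]) * K
    a = np.zeros(n)
    Qa = np.zeros(n)                       # running product Q @ a, updated incrementally
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        max_step = 0.0
        for i in rng.permutation(n):
            grad_i = Qa[i] - 1.0           # partial derivative of the dual objective
            a_new = np.clip(a[i] - grad_i / Q[i, i], 0.0, C)
            step = a_new - a[i]
            if step != 0.0:
                Qa += step * Q[:, i]       # keep Q @ a consistent with the new a_i
                a[i] = a_new
                max_step = max(max_step, abs(step))
        if max_step < tol:
            break
    return a    # decision function: f(x) = sum_i a_i * y_i * K(x, x_i)

# Toy usage on a linearly separable problem with a linear kernel (no bias term).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2.0, size=(25, 2)), rng.normal(loc=2.0, size=(25, 2))])
y = np.concatenate([-np.ones(25), np.ones(25)])
K = X @ X.T
a = dual_coordinate_descent_svm(K, y, C=1.0)
pred = np.sign(K @ (a * y))
print("training accuracy:", np.mean(pred == y))
```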