

Analysis of fixed-point and coordinate descent algorithms for regularized kernel methods.

Authors

Dinuzzo Francesco

Affiliations

Max Planck Institute for Intelligent Systems, Tübingen 72076, Germany.

Publication Information

IEEE Trans Neural Netw. 2011 Oct;22(10):1576-87. doi: 10.1109/TNN.2011.2164096. Epub 2011 Aug 18.

DOI: 10.1109/TNN.2011.2164096
PMID: 21859617
Abstract

In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well-suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent, and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. The two methodologies are both very easy to implement. In this paper, we also show how to remove non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.

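The paper's fixed-point class covers general convex losses; as a rough illustration of the idea only (not a reproduction of the paper's algorithm), the sketch below specializes to squared loss, where the optimality condition of kernel ridge regression yields the fixed-point map c ← (y − Kc)/λ. This map is a contraction whenever λ exceeds the spectral norm of K, and its fixed point is the closed-form solution (K + λI)⁻¹y. The RBF kernel, data, and all function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def fixed_point_krr(K, y, lam, tol=1e-10, max_iter=10000):
    """Fixed-point iteration c <- (y - K c) / lam for squared loss.

    The map is a contraction when lam > ||K||_2; the iterates then
    converge to the kernel ridge solution (K + lam * I)^{-1} y.
    """
    c = np.zeros_like(y)
    for _ in range(max_iter):
        c_new = (y - K @ c) / lam
        if np.linalg.norm(c_new - c) < tol:
            return c_new
        c = c_new
    return c

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))      # toy inputs (assumption)
y = rng.normal(size=20)           # toy targets (assumption)
K = rbf_kernel(X)
lam = 2.0 * np.linalg.norm(K, 2)  # lam > ||K||_2 ensures contraction
c = fixed_point_krr(K, y, lam)
c_direct = np.linalg.solve(K + lam * np.eye(len(y)), y)
print(np.allclose(c, c_direct))   # prints True
```

Each iteration is a single matrix-vector product, which is why iterations of this form parallelize easily, one of the properties the abstract highlights for the fixed-point class.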

Similar Articles

1. Analysis of fixed-point and coordinate descent algorithms for regularized kernel methods.
   IEEE Trans Neural Netw. 2011 Oct;22(10):1576-87. doi: 10.1109/TNN.2011.2164096. Epub 2011 Aug 18.
2. Nonlinear regularization path for quadratic loss support vector machines.
   IEEE Trans Neural Netw. 2011 Oct;22(10):1613-25. doi: 10.1109/TNN.2011.2164265. Epub 2011 Aug 30.
3. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
   Neural Netw. 2011 Jun;24(5):476-83. doi: 10.1016/j.neunet.2011.03.009. Epub 2011 Mar 12.
4. Multiconlitron: a general piecewise linear classifier.
   IEEE Trans Neural Netw. 2011 Feb;22(2):276-89. doi: 10.1109/TNN.2010.2094624. Epub 2010 Dec 6.
5. Direct Kernel Perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation.
   Neural Netw. 2014 Feb;50:60-71. doi: 10.1016/j.neunet.2013.11.002. Epub 2013 Nov 14.
6. The linear separability problem: some testing methods.
   IEEE Trans Neural Netw. 2006 Mar;17(2):330-44. doi: 10.1109/TNN.2005.860871.
7. A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equations.
   Neural Netw. 2004 Sep;17(7):1003-15. doi: 10.1016/j.neunet.2004.05.006.
8. Efficient sparse generalized multiple kernel learning.
   IEEE Trans Neural Netw. 2011 Mar;22(3):433-46. doi: 10.1109/TNN.2010.2103571. Epub 2011 Jan 20.
9. Hidden space support vector machines.
   IEEE Trans Neural Netw. 2004 Nov;15(6):1424-34. doi: 10.1109/TNN.2004.831161.
10. Global convergence of SMO algorithm for support vector regression.
   IEEE Trans Neural Netw. 2008 Jun;19(6):971-82. doi: 10.1109/TNN.2007.915116.