
Accelerating Sequential Minimal Optimization via Stochastic Subgradient Descent.

Publication information

IEEE Trans Cybern. 2021 Apr;51(4):2215-2223. doi: 10.1109/TCYB.2019.2893289. Epub 2021 Mar 17.

Abstract

Sequential minimal optimization (SMO) is one of the most popular methods for solving a variety of support vector machines (SVMs). Shrinking and caching techniques are commonly used to accelerate SMO. An interesting phenomenon of SMO is that most of the computational time is consumed by the first half of the iterations, which merely builds a good solution close to the optimum. However, the stochastic subgradient descent (SSGD) method is known to be extremely fast at building such a good solution. In this paper, we propose a generalized framework for accelerating SMO through SSGD for a variety of SVMs, including binary classification, regression, and ordinal regression. We also provide deep insight into why SSGD can accelerate SMO. Experimental results on a variety of datasets and learning applications confirm that our method can effectively speed up SMO.
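To illustrate the claim that SSGD reaches a good approximate solution very quickly, here is a minimal sketch of a Pegasos-style stochastic subgradient step for a linear SVM with hinge loss. This is only an illustration of the SSGD warm-up idea under simplifying assumptions (linear kernel, primal weight vector); the paper's actual framework handles kernel SVMs and maps the SSGD solution into SMO's dual variables, which this sketch does not attempt. The function name and parameters are hypothetical.

import numpy as np

def pegasos_ssgd(X, y, lam=1e-3, n_iters=10000, seed=0):
    """Pegasos-style stochastic subgradient descent on the hinge loss.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Returns a primal weight vector w that is typically a good
    approximate solution after relatively few updates.
    Hypothetical illustration; not the paper's exact procedure.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)            # sample one training example
        eta = 1.0 / (lam * t)          # decaying step size
        margin = y[i] * X[i].dot(w)
        # subgradient of (lam/2)*||w||^2 + max(0, 1 - y_i * w.x_i)
        if margin < 1.0:
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1.0 - eta * lam) * w
    return w

The rough idea of the proposed framework is to let a cheap SSGD phase like this do the early work of getting near the optimum, and then switch to SMO (with shrinking and caching) for high-precision convergence; how the SSGD iterate is converted into a starting point for SMO is described in the paper and omitted here.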

