
A fast method to approximately train hard support vector regression.

Affiliation

ZNDY of Ministerial Key Laboratory, Nanjing University of Science & Technology, Nanjing, China.

Publication

Neural Netw. 2010 Dec;23(10):1276-85. doi: 10.1016/j.neunet.2010.08.001. Epub 2010 Aug 10.

Abstract

The hard support vector regression (HSVR) usually has a risk of suffering from overfitting due to the presence of noise. The main reason is that it does not utilize the regularization technique to set an upper bound on the Lagrange multipliers so they can be magnified infinitely. Hence, we propose a greedy stagewise based algorithm to approximately train HSVR. At each iteration, the sample which has the maximal predicted discrepancy is selected and its weight is updated only once so as to avoid being excessively magnified. Actually, this early stopping rule can implicitly control the capacity of the regression machine, which is equivalent to a regularization technique. In addition, compared with the well-known software LIBSVM2.82, our algorithm to a certain extent has advantages in both the training time and the number of support vectors. Finally, experimental results on the synthetic and real-world benchmark data sets also corroborate the efficacy of the proposed algorithm.
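The iteration described in the abstract — pick the sample with the maximal prediction discrepancy, update its weight once, and stop early so no multiplier is magnified excessively — can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's exact formulation: the RBF kernel choice, the learning rate `eta`, the single-step update rule, and all parameter names are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_stagewise_hsvr(X, y, epsilon=0.1, eta=0.5, max_iter=200, gamma=1.0):
    """Hypothetical sketch of greedy stagewise training for hard SVR.

    Each iteration selects the most-violating sample (largest residual
    outside the epsilon-tube) and updates its dual weight once.  Capping
    the number of iterations is the early stopping rule that implicitly
    bounds the weights, playing the role of regularization.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    beta = np.zeros(n)          # dual weights, one per training sample
    b = y.mean()                # bias initialized to the mean target
    for _ in range(max_iter):
        residual = y - (K @ beta + b)       # prediction discrepancy
        i = np.argmax(np.abs(residual))     # maximal-discrepancy sample
        if np.abs(residual[i]) <= epsilon:  # every sample inside the tube
            break
        beta[i] += eta * residual[i]        # single weight update
    support = np.flatnonzero(np.abs(beta) > 1e-12)
    return beta, b, support

# Toy usage: fit a noisy sine curve.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(50)
beta, b, support = greedy_stagewise_hsvr(X, y, gamma=0.5)
pred = rbf_kernel(X, X, 0.5) @ beta + b
```

Because only one weight changes per iteration, the number of nonzero weights — and hence the candidate support vectors — is bounded by the number of iterations, which is consistent with the sparsity advantage the abstract claims over LIBSVM.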

Abstract (Chinese translation)

Hard support vector regression (HSVR) is prone to overfitting in the presence of noise. The main reason is that it does not use regularization to place an upper bound on the Lagrange multipliers, so they can grow without bound. We therefore propose a greedy stagewise algorithm to approximately train HSVR. At each iteration, the sample with the largest prediction discrepancy is selected, and its weight is updated only once so that it is not excessively magnified. This early stopping rule implicitly controls the capacity of the regression machine and is thus equivalent to a regularization technique. In addition, compared with the well-known software LIBSVM2.82, our algorithm offers advantages, to a certain extent, in both training time and the number of support vectors. Finally, experimental results on synthetic and real-world benchmark data sets corroborate the efficacy of the proposed algorithm.
