

Parallel sequential minimal optimization for the training of support vector machines.

Authors

Cao L J, Keerthi S S, Ong Chong-Jin, Zhang J Q, Periyathamby Uvaraj, Fu Xiu Ju, Lee H P

Affiliation

Financial Studies, Fudan University, Shanghai, PR China.

Publication

IEEE Trans Neural Netw. 2006 Jul;17(4):1039-49. doi: 10.1109/TNN.2006.875989.

Abstract

Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time to solve large-scale problems. This paper proposes a parallel implementation of SMO for training SVMs, developed using the message passing interface (MPI). Specifically, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple processors simultaneously, each handling one of the partitioned subsets. Experiments show a great speedup on the Adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used, with satisfactory results on the Web data set as well.
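The partition-and-parallelize idea described above can be sketched in miniature. The paper's own implementation uses MPI; the sketch below substitutes Python's `multiprocessing` module and shows only one representative step, computing a kernel row (the dominant cost in SMO's error-cache updates) over partitioned data chunks in parallel. All names and parameters here are illustrative, not taken from the paper.

```python
from multiprocessing import Pool
import numpy as np

def rbf_row_chunk(args):
    """Evaluate the RBF kernel of one point x against one data chunk.

    Each worker handles one chunk, mirroring how parallel SMO assigns
    each processor a subset of the partitioned training data.
    """
    x, chunk, gamma = args
    sq_dist = ((chunk - x) ** 2).sum(axis=1)
    return np.exp(-gamma * sq_dist)

def parallel_kernel_row(x, X, n_workers=4, gamma=0.5):
    """Split X into n_workers chunks and evaluate the kernel row in
    parallel, then concatenate the partial results in order."""
    chunks = np.array_split(X, n_workers)
    with Pool(n_workers) as pool:
        parts = pool.map(rbf_row_chunk, [(x, c, gamma) for c in chunks])
    return np.concatenate(parts)
```

Because each chunk's kernel evaluations are independent, the work distributes with no communication until the final gather, which is the property the paper exploits; with MPI the gather step would instead be a collective operation such as `Allreduce` or `Gather`.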

