Extreme learning machine for regression and multiclass classification.

Authors

Huang Guang-Bin, Zhou Hongming, Ding Xiaojian, Zhang Rui

Affiliation

School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798.

Publication

IEEE Trans Syst Man Cybern B Cybern. 2012 Apr;42(2):513-29. doi: 10.1109/TSMCB.2011.2168604. Epub 2011 Oct 6.

Abstract

Due to the simplicity of their implementations, the least-squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be applied directly to regression and multiclass classification, although variants of both have been proposed to handle such cases. This paper shows that LS-SVM and PSVM can be simplified further, and that LS-SVM, PSVM, and other regularization algorithms can be unified in a single learning framework referred to as the extreme learning machine (ELM). ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform for a wide range of feature mappings and can be applied directly to regression and multiclass classification; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared with ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to scale better and to achieve similar (for regression and binary classification) or much better (for multiclass classification) generalization performance at much faster learning speed (up to thousands of times faster) than traditional SVM and LS-SVM.
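The core recipe the abstract describes, a randomly assigned and untuned hidden layer followed by a regularized least-squares solve for the output weights, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' code: the function names, the tanh activation, and the regularization parameter `C` are choices made here for concreteness, with the output weights computed as beta = (I/C + H^T H)^{-1} H^T T.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, C=1.0, seed=None):
    """Minimal ELM sketch: random (untuned) hidden layer, ridge-regularized
    least-squares output weights. X: (n, d) inputs, T: (n, m) targets."""
    rng = np.random.default_rng(seed)
    # Hidden-layer parameters are assigned randomly and never tuned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output (feature mapping)
    # Regularized closed-form solution: beta = (I/C + H^T H)^{-1} H^T T
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random feature mapping, then the learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy regression demo: fit sin(pi * x) on random points in [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
T = np.sin(np.pi * X)
W, b, beta = elm_train(X, T, n_hidden=40, C=100.0, seed=0)
mse = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

For multiclass classification the same solve applies unchanged: encode `T` as one-hot target rows and take the argmax of `elm_predict` over the output columns, which is how the unified framework handles regression, binary, and multiclass cases with one formulation.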
