Kernel-Based Multilayer Extreme Learning Machines for Representation Learning.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2018 Mar;29(3):757-762. doi: 10.1109/TNNLS.2016.2636834. Epub 2016 Dec 29.

Abstract

Recently, the multilayer extreme learning machine (ML-ELM) was applied to the stacked autoencoder (SAE) for representation learning. In contrast to the traditional SAE, the training time of ML-ELM is reduced from hours to seconds while maintaining high accuracy. However, ML-ELM suffers from several drawbacks: 1) manually tuning the number of hidden nodes in every layer introduces uncertainty into both training time and generalization; 2) the random projection of input weights and biases in every layer leads to suboptimal model generalization; 3) the pseudoinverse solution for the output weights in every layer incurs a relatively large reconstruction error; and 4) the storage and execution time for the transformation matrices in representation learning grow in proportion to the number of hidden layers. Inspired by kernel learning, a kernel version of ML-ELM is developed, namely, the multilayer kernel ELM (ML-KELM), whose contributions are: 1) elimination of manual tuning of the number of hidden nodes in every layer; 2) removal of the random projection mechanism, yielding optimal model generalization; 3) an exact inverse solution for the output weights, guaranteed whenever the kernel matrix is invertible, resulting in a smaller reconstruction error; and 4) unification of all transformation matrices into only two matrices, which reduces storage and may shorten model execution time. Benchmark data sets of different sizes have been employed to evaluate ML-KELM. Experimental results have verified the contributions of the proposed ML-KELM, with accuracy improvements of up to 7% over the benchmark data sets.
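
The paper's exact formulation is in the article itself; as a rough illustration of the closed-form kernel solution the abstract alludes to, the following minimal Python sketch trains one kernel-ELM autoencoder layer and stacks two of them. The RBF kernel choice, the regularization parameter C, and the names rbf_kernel and kelm_autoencoder_layer are assumptions made here for illustration, following the standard kernel ELM solution alpha = (K + I/C)^{-1} T with the autoencoder target T = X; the paper's actual ML-KELM construction may differ in its layer-wise details.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2).
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def kelm_autoencoder_layer(X, C=1e3, gamma=1.0):
    # Kernel ELM trained as an autoencoder: the target equals the input X,
    # so the output weights alpha solve (K + I/C) alpha = X, which has an
    # exact solution whenever the regularized kernel matrix is invertible
    # (cf. contribution 3 above). No random input weights are drawn
    # (cf. contribution 2), and no hidden-node count is tuned
    # (cf. contribution 1).
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + np.eye(X.shape[0]) / C, X)
    # The learned representation is the kernel activations mapped by alpha.
    return K @ alpha, alpha

# Stacking two layers: each layer re-encodes the previous representation.
X = np.random.randn(200, 30)              # toy data: 200 samples, 30 features
H1, alpha1 = kelm_autoencoder_layer(X)    # first-layer representation
H2, alpha2 = kelm_autoencoder_layer(H1)   # second-layer representation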

