Suppr 超能文献


Kernel-Based Multilayer Extreme Learning Machines for Representation Learning.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2018 Mar;29(3):757-762. doi: 10.1109/TNNLS.2016.2636834. Epub 2016 Dec 29.

DOI: 10.1109/TNNLS.2016.2636834
PMID: 28055922
Abstract

Recently, the multilayer extreme learning machine (ML-ELM) was applied to the stacked autoencoder (SAE) for representation learning. In contrast to a traditional SAE, the training time of ML-ELM is significantly reduced from hours to seconds while maintaining high accuracy. However, ML-ELM suffers from several drawbacks: 1) manual tuning of the number of hidden nodes in every layer introduces uncertainty into training time and generalization; 2) random projection of the input weights and bias in every layer leads to suboptimal model generalization; 3) the pseudoinverse solution for the output weights in every layer incurs a relatively large reconstruction error; and 4) the storage and execution time for the transformation matrices in representation learning are proportional to the number of hidden layers. Inspired by kernel learning, a kernel version of ML-ELM is developed, namely, the multilayer kernel ELM (ML-KELM), whose contributions are: 1) elimination of manual tuning of the number of hidden nodes in every layer; 2) no random projection mechanism, so that optimal model generalization is obtained; 3) an exact inverse solution for the output weights is guaranteed when the kernel matrix is invertible, resulting in a smaller reconstruction error; and 4) all transformation matrices are unified into only two matrices, so that storage is reduced and model execution time may be shortened. Benchmark data sets of different sizes have been employed to evaluate ML-KELM. Experimental results have verified the contributions of the proposed ML-KELM, with accuracy improvements of up to 7% over the benchmark data sets.
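The single-layer building block that the abstract describes (a kernel ELM autoencoder whose output weights are obtained by an exact solve of the kernel system, rather than a pseudoinverse) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`rbf_kernel`, `kelm_fit`, `kelm_predict`) and the RBF kernel choice are assumptions, and `C=None` corresponds to the paper's exact-inverse case, which assumes an invertible kernel matrix.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of A and rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kelm_fit(X, T, gamma=1.0, C=None):
    """Kernel ELM: solve K @ beta = T for the output weights beta.
    With C=None, the exact solution K^{-1} T is used (assumes K invertible);
    otherwise a ridge term I/C is added for numerical stability."""
    K = rbf_kernel(X, X, gamma)
    if C is not None:
        K = K + np.eye(len(X)) / C
    return np.linalg.solve(K, T)

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    # Map new samples through the kernel against the training set.
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Autoencoder use for representation learning: the target matrix T is the
# input X itself; stacking such layers is the idea behind ML-KELM.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
beta = kelm_fit(X, X, gamma=0.5)          # exact solve, no pseudoinverse
X_rec = kelm_predict(X, beta, X, gamma=0.5)
print(np.allclose(X_rec, X, atol=1e-5))    # reconstruction error near machine precision
```

Because the RBF kernel matrix of distinct points is positive definite, the exact solve reproduces the targets almost perfectly, which mirrors the abstract's claim of a smaller reconstruction error than the pseudoinverse-based ML-ELM.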


Similar Articles

1. Kernel-Based Multilayer Extreme Learning Machines for Representation Learning.
   IEEE Trans Neural Netw Learn Syst. 2018 Mar;29(3):757-762. doi: 10.1109/TNNLS.2016.2636834. Epub 2016 Dec 29.
2. Radar HRRP Target Recognition Based on Stacked Autoencoder and Extreme Learning Machine.
   Sensors (Basel). 2018 Jan 10;18(1):173. doi: 10.3390/s18010173.
3. Mixture Correntropy-Based Kernel Extreme Learning Machines.
   IEEE Trans Neural Netw Learn Syst. 2022 Feb;33(2):811-825. doi: 10.1109/TNNLS.2020.3029198. Epub 2022 Feb 3.
4. Extreme Learning Machine for Multilayer Perceptron.
   IEEE Trans Neural Netw Learn Syst. 2016 Apr;27(4):809-21. doi: 10.1109/TNNLS.2015.2424995. Epub 2015 May 7.
5. Tuning extreme learning machine by an improved electromagnetism-like mechanism algorithm for classification problem.
   Math Biosci Eng. 2019 May 23;16(5):4692-4707. doi: 10.3934/mbe.2019235.
6. A fast kernel extreme learning machine based on conjugate gradient.
   Network. 2018;29(1-4):70-80. doi: 10.1080/0954898X.2018.1562247. Epub 2019 Jan 27.
7. Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines.
   Comput Intell Neurosci. 2017;2017:3405463. doi: 10.1155/2017/3405463. Epub 2017 May 4.
8. Parsimonious kernel extreme learning machine in primal via Cholesky factorization.
   Neural Netw. 2016 Aug;80:95-109. doi: 10.1016/j.neunet.2016.04.009. Epub 2016 May 2.
9. Online Sequential Extreme Learning Machine With Kernels.
   IEEE Trans Neural Netw Learn Syst. 2015 Sep;26(9):2214-20. doi: 10.1109/TNNLS.2014.2382094. Epub 2014 Dec 31.
10. Multilayer one-class extreme learning machine.
    Neural Netw. 2019 Jul;115:11-22. doi: 10.1016/j.neunet.2019.03.004. Epub 2019 Mar 19.

Cited By

1. Radar-Based Control of a Helical Microswimmer in 3-Dimensional Space with Dynamic Obstacles.
   Cyborg Bionic Syst. 2025 Jun 2;6:0158. doi: 10.34133/cbsystems.0158. eCollection 2025.
2. Cost-sensitive multi-kernel ELM based on reduced expectation kernel auto-encoder.
   PLoS One. 2025 Feb 13;20(2):e0314851. doi: 10.1371/journal.pone.0314851. eCollection 2025.
3. Discrimination of Explosive Residues by Standoff Sensing Using Anodic Aluminum Oxide Microcantilever Laser Absorption Spectroscopy with Kernel-Based Machine Learning.
   Sensors (Basel). 2024 Sep 10;24(18):5867. doi: 10.3390/s24185867.
4. Approximate solutions to several classes of Volterra and Fredholm integral equations using the neural network algorithm based on the sine-cosine basis function and extreme learning machine.
   Front Comput Neurosci. 2023 Mar 9;17:1120516. doi: 10.3389/fncom.2023.1120516. eCollection 2023.
5. TSTELM: Two-Stage Transfer Extreme Learning Machine for Unsupervised Domain Adaptation.
   Comput Intell Neurosci. 2022 Jul 18;2022:1582624. doi: 10.1155/2022/1582624. eCollection 2022.