

Graph Embedded Extreme Learning Machine.

Publication information

IEEE Trans Cybern. 2016 Jan;46(1):311-24. doi: 10.1109/TCYB.2015.2401973. Epub 2015 Mar 2.

DOI: 10.1109/TCYB.2015.2401973
PMID: 25751883
Abstract

In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria on the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.
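The core of the approach described above is a regularized least-squares solve for the output weights of a single-hidden-layer network, where a graph-embedding (subspace learning) penalty is added to the standard ELM objective. The sketch below illustrates this idea in NumPy under simplifying assumptions not taken from the paper: a tanh hidden layer, a toy intrinsic graph built from class labels, and a closed-form solve of the form beta = (H^T H + lam * H^T L H + c I)^-1 H^T T. Function names and hyperparameters are hypothetical; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def geelm_train(X, T, n_hidden=100, c=1.0, lam=0.1, seed=None):
    """Sketch of graph-regularized ELM training.

    X: (n, d) input samples; T: (n, k) one-hot targets.
    Returns the fixed random layer (W, b) and learned output weights beta.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random input weights and biases, fixed after initialization (standard ELM)
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix, shape (n, n_hidden)

    # Toy intrinsic graph: connect samples that share a class label
    labels = T.argmax(axis=1)
    A = (labels[:, None] == labels[None, :]).astype(float)
    L = np.diag(A.sum(axis=1)) - A  # unnormalized graph Laplacian

    # Regularized least squares with a graph-embedding penalty:
    #   beta = (H^T H + lam * H^T L H + c * I)^-1 H^T T
    G = H.T @ H + lam * (H.T @ L @ H) + c * np.eye(n_hidden)
    beta = np.linalg.solve(G, H.T @ T)
    return W, b, beta

def geelm_predict(X, W, b, beta):
    """Network output scores; argmax over columns gives the predicted class."""
    return np.tanh(X @ W + b) @ beta
```

Setting `lam = 0` recovers plain ridge-regularized ELM; the graph term pulls hidden-layer representations of connected samples together during the output-weight solve, which is the intuition behind exploiting intrinsic SL criteria.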


Similar articles

1. Graph Embedded Extreme Learning Machine.
IEEE Trans Cybern. 2016 Jan;46(1):311-24. doi: 10.1109/TCYB.2015.2401973. Epub 2015 Mar 2.
2. Close Human Interaction Recognition Using Patch-Aware Models.
IEEE Trans Image Process. 2016 Jan;25(1):167-78. doi: 10.1109/TIP.2015.2498410. Epub 2015 Nov 5.
3. Cross Euclidean-to-Riemannian Metric Learning with Application to Face Recognition from Video.
IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2827-2840. doi: 10.1109/TPAMI.2017.2776154. Epub 2017 Nov 22.
4. Extreme learning machine and adaptive sparse representation for image classification.
Neural Netw. 2016 Sep;81:91-102. doi: 10.1016/j.neunet.2016.06.001. Epub 2016 Jun 23.
5. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
Int J Neural Syst. 2008 Oct;18(5):433-41. doi: 10.1142/S0129065708001695.
6. Sparse Bayesian extreme learning machine for multi-classification.
IEEE Trans Neural Netw Learn Syst. 2014 Apr;25(4):836-43. doi: 10.1109/TNNLS.2013.2281839.
7. Bidirectional extreme learning machine for regression problem and its learning effectiveness.
IEEE Trans Neural Netw Learn Syst. 2012 Sep;23(9):1498-505. doi: 10.1109/TNNLS.2012.2202289.
8. An Improved Pathological Brain Detection System Based on Two-Dimensional PCA and Evolutionary Extreme Learning Machine.
J Med Syst. 2017 Dec 7;42(1):19. doi: 10.1007/s10916-017-0867-4.
9. A novel multiple instance learning method based on extreme learning machine.
Comput Intell Neurosci. 2015;2015:405890. doi: 10.1155/2015/405890. Epub 2015 Feb 3.
10. Classification of imbalanced bioinformatics data by using boundary movement-based ELM.
Biomed Mater Eng. 2015;26 Suppl 1:S1855-62. doi: 10.3233/BME-151488.

Cited by

1. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.
PLoS One. 2018 Apr 19;13(4):e0194770. doi: 10.1371/journal.pone.0194770. eCollection 2018.