
Comparing support vector machines and feedforward neural networks with similar hidden-layer weights.

Author Information

Romero Enrique, Toppo Daniel

Publication Information

IEEE Trans Neural Netw. 2007 May;18(3):959-63. doi: 10.1109/TNN.2007.891656.

DOI: 10.1109/TNN.2007.891656
PMID: 17526367
Abstract

Support vector machines (SVMs) usually need a large number of support vectors to form their output. Recently, several models have been proposed to build SVMs with a small number of basis functions, maintaining the property that their hidden-layer weights are a subset of the data (the support vectors). This property is also present in some algorithms for feedforward neural networks (FNNs) that construct the network sequentially, leading to sparse models where the number of hidden units can be explicitly controlled. An experimental study on several benchmark data sets, comparing SVMs and the aforementioned sequential FNNs, was carried out. The experiments were performed in the same conditions for all the models, and they can be seen as a comparison of SVMs and FNNs when both models are restricted to use similar hidden-layer weights. Accuracies were found to be very similar. Regarding the number of support vectors, sequential FNNs constructed models with fewer hidden units than standard SVMs and in the same range as "sparse" SVMs. Computational times were lower for SVMs.
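
The setup described in the abstract lends itself to a small illustration. The sketch below (Python with scikit-learn) contrasts a standard RBF-kernel SVM, whose number of basis functions falls out of the optimization, with a feedforward network whose hidden-layer weights are an explicitly sized subset of the training data. It does not reproduce the paper's sequential unit-selection procedure; a random subset of training points stands in for it, and the synthetic dataset, subset size, and gamma value are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithms): an RBF-kernel SVM vs. a
# feedforward network whose hidden-layer weights are a subset of the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, pairwise_distances
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gamma = 1.0 / X.shape[1]

# Standard SVM: the number of basis functions (support vectors) is not
# controlled directly; it emerges from the margin optimization.
svm = SVC(kernel="rbf", gamma=gamma).fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("SVM hidden units (support vectors):", len(svm.support_))

# FNN analog: hidden units are Gaussian basis functions centered on selected
# training points, so the hidden-layer size is explicit and controllable.
# A random subset replaces the paper's sequential selection (assumption).
rng = np.random.default_rng(0)
centers = X_tr[rng.choice(len(X_tr), size=30, replace=False)]  # 30 hidden units

def rbf_features(X, centers, gamma):
    # Hidden-layer activations: one Gaussian unit per selected data point.
    return np.exp(-gamma * pairwise_distances(X, centers) ** 2)

fnn = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr, centers, gamma), y_tr)
print("FNN accuracy:", accuracy_score(y_te, fnn.predict(rbf_features(X_te, centers, gamma))))
print("FNN hidden units (fixed):", len(centers))
```

In this framing both models are Gaussian-basis networks whose centers come from the data; they differ in how the centers are chosen and how the output layer is fit, which is what makes their hidden-unit counts directly comparable.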


Similar Articles

1. Comparing support vector machines and feedforward neural networks with similar hidden-layer weights.
   IEEE Trans Neural Netw. 2007 May;18(3):959-63. doi: 10.1109/TNN.2007.891656.
2. Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks.
   Neural Netw. 2012 Jan;25(1):122-9. doi: 10.1016/j.neunet.2011.08.005. Epub 2011 Sep 6.
3. Hidden space support vector machines.
   IEEE Trans Neural Netw. 2004 Nov;15(6):1424-34. doi: 10.1109/TNN.2004.831161.
4. Feature selection in MLPs and SVMs based on maximum output information.
   IEEE Trans Neural Netw. 2004 Jul;15(4):937-48. doi: 10.1109/TNN.2004.828772.
5. Incremental training of support vector machines.
   IEEE Trans Neural Netw. 2005 Jan;16(1):114-31. doi: 10.1109/TNN.2004.836201.
6. Estimating the number of hidden neurons in a feedforward network using the singular value decomposition.
   IEEE Trans Neural Netw. 2006 Nov;17(6):1623-9. doi: 10.1109/TNN.2006.880582.
7. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
   Int J Neural Syst. 2008 Oct;18(5):433-41. doi: 10.1142/S0129065708001695.
8. Training hard-margin support vector machines using greedy stagewise algorithm.
   IEEE Trans Neural Netw. 2008 Aug;19(8):1446-55. doi: 10.1109/TNN.2008.2000576.
9. A bottom-up method for simplifying support vector solutions.
   IEEE Trans Neural Netw. 2006 May;17(3):792-6. doi: 10.1109/TNN.2006.873287.
10. Associative memory design using support vector machines.
   IEEE Trans Neural Netw. 2006 Sep;17(5):1165-74. doi: 10.1109/TNN.2006.877539.