Clustering-based algorithms for single-hidden-layer sigmoid perceptron.

Author Information

Uykan Z

Affiliations

Control Eng. Lab., Helsinki Univ. of Technol., Espoo, Finland.

Publication Information

IEEE Trans Neural Netw. 2003;14(3):708-15. doi: 10.1109/TNN.2003.813532.

DOI: 10.1109/TNN.2003.813532
PMID: 18238052
Abstract

Gradient-descent type supervised learning is the most commonly used algorithm for design of the standard sigmoid perceptron (SP). However, it is computationally expensive (slow) and suffers from the local-minima problem. Moody and Darken (1989) proposed an input-clustering-based hierarchical algorithm for fast learning in networks of locally tuned neurons in the context of radial basis function networks. We propose and analyze input clustering (IC) and input-output clustering (IOC) based algorithms for fast learning in networks of globally tuned neurons in the context of the SP. It is shown that "localizing" the input-layer weights of the SP by the IC and the IOC minimizes an upper bound on the SP output error. The proposed algorithms could also be used to initialize the SP weights for conventional gradient-descent learning. Simulation results show that the SPs designed by the IC and the IOC yield performance comparable to that of their radial basis function network counterparts.
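
The abstract only sketches the IC/IOC algorithms, so the following is a minimal, hedged Python sketch of the IC idea: cluster the inputs with k-means, point each hidden unit's weight vector at a cluster center so its sigmoid responds most strongly near that center ("localized" input-layer weights), and fit the linear output layer by least squares. The center-to-weight mapping, the function name `ic_init_sp`, and the `scale` parameter are assumptions for illustration, and scikit-learn's KMeans stands in for whatever clustering procedure the paper actually uses.

```python
# Hedged sketch of IC-style design of a single-hidden-layer sigmoid
# perceptron (SP). The exact center-to-weight mapping is an assumption;
# the abstract only states that input-layer weights are "localized" at
# input cluster centers.
import numpy as np
from sklearn.cluster import KMeans

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ic_init_sp(X, y, n_hidden=10, scale=1.0, random_state=0):
    """Design an SP by clustering the inputs (IC).

    X: (n_samples, n_features) inputs; y: (n_samples,) targets.
    Returns (W, b, v): hidden weights, hidden biases, output weights.
    """
    km = KMeans(n_clusters=n_hidden, n_init=10,
                random_state=random_state).fit(X)
    centers = km.cluster_centers_              # (n_hidden, n_features)
    # Assumed mapping: w_j = scale * c_j, b_j = -scale * ||c_j||^2 / 2,
    # so the sigmoid's argument is (scale/2)(||x||^2 - ||x - c_j||^2),
    # i.e. unit j fires most strongly for inputs near its center c_j.
    W = scale * centers
    b = -scale * np.sum(centers ** 2, axis=1) / 2.0
    H = sigmoid(X @ W.T + b)                   # hidden-layer activations
    # Linear output layer (with bias column) solved by least squares.
    A = np.c_[H, np.ones(len(X))]
    v, *_ = np.linalg.lstsq(A, y, rcond=None)
    return W, b, v
```

As the abstract notes, weights obtained this way could also serve as an initialization for conventional gradient-descent training of the SP.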

Similar Articles

1. Clustering-based algorithms for single-hidden-layer sigmoid perceptron. IEEE Trans Neural Netw. 2003;14(3):708-15. doi: 10.1109/TNN.2003.813532.
2. Analysis of input-output clustering for determining centers of RBFN. IEEE Trans Neural Netw. 2000;11(4):851-8. doi: 10.1109/72.857766.
3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
4. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks. Int J Neural Syst. 2008 Oct;18(5):433-41. doi: 10.1142/S0129065708001695.
5. Design of double fuzzy clustering-driven context neural networks. Neural Netw. 2018 Aug;104:1-14. doi: 10.1016/j.neunet.2018.03.018. Epub 2018 Apr 9.
6. Novel maximum-margin training algorithms for supervised neural networks. IEEE Trans Neural Netw. 2010 Jun;21(6):972-84. doi: 10.1109/TNN.2010.2046423. Epub 2010 Apr 19.
7. Radial basis function networks with linear interval regression weights for symbolic interval data. IEEE Trans Syst Man Cybern B Cybern. 2012 Feb;42(1):69-80. doi: 10.1109/TSMCB.2011.2161468. Epub 2011 Aug 18.
8. Merging back-propagation and Hebbian learning rules for robust classifications. Neural Netw. 1996 Oct;9(7):1213-1222. doi: 10.1016/0893-6080(96)00042-1.
9. Projection-based fast learning fully complex-valued relaxation neural network. IEEE Trans Neural Netw Learn Syst. 2013 Apr;24(4):529-41. doi: 10.1109/TNNLS.2012.2235460.
10. Learning algorithms based on linearization. Network. 1998 Aug;9(3):363-80.