

Magnified gradient function with deterministic weight modification in adaptive learning.

Author Information

Ng Sin-Chun, Cheung Chi-Chung, Leung Shu-Hung

Affiliations

School of Science and Technology, The Open University of Hong Kong, Hong Kong, China.

Publication Information

IEEE Trans Neural Netw. 2004 Nov;15(6):1411-23. doi: 10.1109/TNN.2004.836237.

DOI: 10.1109/TNN.2004.836237
PMID: 15565769
Abstract

This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than that of BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. From our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
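
To make the MGFPROP idea concrete, below is a minimal sketch of batch backpropagation on XOR with a magnified gradient factor. It assumes one plausible reading of the abstract, not the paper's exact update rule: the usual sigmoid-derivative term o(1-o) in each delta is replaced by (o(1-o))^(1/S) with S >= 1, which enlarges the near-zero gradients that stall training when units saturate (S = 1 recovers standard BP). The 2-4-1 network, the choice S = 4, and names such as magnified_deriv are illustrative; DWM's deterministic weight changes are not modeled here.

```python
# Hypothetical MGFPROP-style sketch: BP whose delta terms use a magnified
# sigmoid-derivative factor. Assumptions are noted above; this is not the
# paper's verbatim algorithm.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(o, S):
    # The standard sigmoid derivative o*(1-o) is at most 0.25; raising it
    # to the power 1/S (S >= 1) pushes it toward 1, so deltas stay usable
    # even when a unit saturates. S = 1 gives plain BP.
    return (o * (1.0 - o)) ** (1.0 / S)

# Toy task: XOR with a 2-4-1 sigmoid network, batch weight updates.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)   # hidden -> output

eta, S = 0.5, 4.0   # learning rate and magnification factor (illustrative)

for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    # Backward pass with magnified gradient factors in place of o*(1-o).
    delta_o = (T - O) * magnified_deriv(O, S)
    delta_h = (delta_o @ W2.T) * magnified_deriv(H, S)
    # Gradient-ascent-style updates on the (T - O) error convention.
    W2 += eta * H.T @ delta_o; b2 += eta * delta_o.sum(axis=0)
    W1 += eta * X.T @ delta_h; b1 += eta * delta_h.sum(axis=0)

print(np.round(O.ravel(), 3))  # should approach [0, 1, 1, 0]
```

Because o(1-o) never exceeds 0.25, taking its 1/S-th power strictly increases it, so the modification only rescales step sizes and leaves the sign of every gradient component unchanged.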

Similar Articles

1. Magnified gradient function with deterministic weight modification in adaptive learning.
IEEE Trans Neural Netw. 2004 Nov;15(6):1411-23. doi: 10.1109/TNN.2004.836237.
2. On adaptive learning rate that guarantees convergence in feedforward networks.
IEEE Trans Neural Netw. 2006 Sep;17(5):1116-25. doi: 10.1109/TNN.2006.878121.
3. Convergence of gradient method with momentum for two-layer feedforward neural networks.
IEEE Trans Neural Netw. 2006 Mar;17(2):522-5. doi: 10.1109/TNN.2005.863460.
4. Implementing online natural gradient learning: problems and solutions.
IEEE Trans Neural Netw. 2006 Mar;17(2):317-29. doi: 10.1109/TNN.2005.863406.
5. On the weight convergence of Elman networks.
IEEE Trans Neural Netw. 2010 Mar;21(3):463-80. doi: 10.1109/TNN.2009.2039226. Epub 2010 Feb 2.
6. Global convergence of online BP training with dynamic learning rate.
IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):330-41. doi: 10.1109/TNNLS.2011.2178315.
7. Neural network learning with global heuristic search.
IEEE Trans Neural Netw. 2007 May;18(3):937-42. doi: 10.1109/TNN.2007.891633.
8. Decision feedback recurrent neural equalization with fast convergence rate.
IEEE Trans Neural Netw. 2005 May;16(3):699-708. doi: 10.1109/TNN.2005.845142.
9. Parameter incremental learning algorithm for neural networks.
IEEE Trans Neural Netw. 2006 Nov;17(6):1424-38. doi: 10.1109/TNN.2006.880581.
10. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
IEEE Trans Neural Netw. 2008 Nov;19(11):1841-53. doi: 10.1109/TNN.2008.2001923.

Cited By

1. Archive-based coronavirus herd immunity algorithm for optimizing weights in neural networks.
Neural Comput Appl. 2023;35(21):15923-15941. doi: 10.1007/s00521-023-08577-y. Epub 2023 Apr 19.