

Adding learning to cellular genetic algorithms for training recurrent neural networks.

Authors

Ku K W, Mak M W, Siu W C

Affiliation

Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong.

Publication

IEEE Trans Neural Netw. 1999;10(2):239-52. doi: 10.1109/72.750546.

DOI: 10.1109/72.750546
PMID: 18252524
Abstract

This paper proposes a hybrid optimization algorithm which combines the efforts of local search (individual learning) and cellular genetic algorithms (GA's) for training recurrent neural networks (RNN's). Each weight of an RNN is encoded as a floating point number, and a concatenation of the numbers forms a chromosome. Reproduction takes place locally in a square grid with each grid point representing a chromosome. Two approaches, Lamarckian and Baldwinian mechanisms, for combining cellular GA's and learning have been compared. Different hill-climbing algorithms are incorporated into the cellular GA's as learning methods. These include the real-time recurrent learning (RTRL) and its simplified versions, and the delta rule. The RTRL algorithm has been successively simplified by freezing some of the weights to form simplified versions. The delta rule, which is the simplest form of learning, has been implemented by considering the RNN's as feedforward networks during learning. The hybrid algorithms are used to train the RNN's to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer is the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism show an improvement in reducing the number of generations required for an optimum network; however, only a few can reduce the actual time taken. Embedding the delta rule in the cellular GA's has been found to be the fastest method. It is also concluded that learning should not be too extensive if the hybrid algorithm is to benefit from learning.
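The Lamarckian scheme the abstract describes can be sketched in a few lines: chromosomes are flat weight vectors placed on a square grid, mating is restricted to grid neighbours, and each offspring receives a few gradient (delta-rule-like) learning steps whose result is written back into the genome. The sketch below is illustrative only; the grid size, toy fitness task, and all function names are assumptions, not taken from the paper.

```python
# Minimal sketch of a Lamarckian cellular GA (illustrative, not the paper's code).
import random

GRID = 4                     # 4x4 grid, one chromosome per grid point
DIM = 3                      # weights per chromosome
TARGET = [0.5, -1.0, 2.0]    # toy optimum: minimise squared distance to TARGET

def error(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def learn(w, steps=3, lr=0.2):
    """Delta-rule-like local search: a few gradient steps on the error."""
    w = list(w)
    for _ in range(steps):
        w = [wi - lr * 2 * (wi - ti) for wi, ti in zip(w, TARGET)]
    return w

def neighbours(i, j):
    # von Neumann neighbourhood on a torus: reproduction is local
    return [((i + di) % GRID, (j + dj) % GRID)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def evolve(generations=30, seed=0):
    rng = random.Random(seed)
    grid = [[[rng.uniform(-3, 3) for _ in range(DIM)] for _ in range(GRID)]
            for _ in range(GRID)]
    for _ in range(generations):
        new = [[None] * GRID for _ in range(GRID)]
        for i in range(GRID):
            for j in range(GRID):
                # local selection: fittest neighbour becomes the mate
                mate = min((grid[a][b] for a, b in neighbours(i, j)), key=error)
                # uniform crossover plus small Gaussian mutation
                child = [rng.choice(pair) + rng.gauss(0, 0.05)
                         for pair in zip(grid[i][j], mate)]
                # Lamarckian step: learned weights are kept in the chromosome
                child = learn(child)
                # replace the resident only if the child is fitter
                new[i][j] = min(grid[i][j], child, key=error)
        grid = new
    return min((c for row in grid for c in row), key=error)

best = evolve()
print(error(best))
```

A Baldwinian variant would instead use `error(learn(child))` only for selection while storing the unlearned `child` in the grid, which is the distinction the abstract's conjecture turns on.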


Similar Articles

1. Adding learning to cellular genetic algorithms for training recurrent neural networks.
   IEEE Trans Neural Netw. 1999;10(2):239-52. doi: 10.1109/72.750546.
2. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
   IEEE Trans Neural Netw. 2008 Nov;19(11):1841-53. doi: 10.1109/TNN.2008.2001923.
3. Existence and learning of oscillations in recurrent neural networks.
   IEEE Trans Neural Netw. 2000;11(1):205-14. doi: 10.1109/72.822523.
4. Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences.
   IEEE Trans Neural Netw. 1999;10(2):284-302. doi: 10.1109/72.750555.
5. Empirical investigation of the benefits of partial Lamarckianism.
   Evol Comput. 1997 Spring;5(1):31-60. doi: 10.1162/evco.1997.5.1.31.
6. Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter.
   IEEE Trans Neural Netw Learn Syst. 2015 Sep;26(9):1900-12. doi: 10.1109/TNNLS.2014.2361267. Epub 2014 Oct 15.
7. Emergence of belief-like representations through reinforcement learning.
   bioRxiv. 2023 Apr 4:2023.04.04.535512. doi: 10.1101/2023.04.04.535512.
8. Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm.
   IEEE Trans Neural Netw. 2006 Jan;17(1):69-80. doi: 10.1109/TNN.2005.860885.
9. Multiobjective hybrid optimization and training of recurrent neural networks.
   IEEE Trans Syst Man Cybern B Cybern. 2008 Apr;38(2):381-403. doi: 10.1109/TSMCB.2007.912937.
10. Decision feedback recurrent neural equalization with fast convergence rate.
    IEEE Trans Neural Netw. 2005 May;16(3):699-708. doi: 10.1109/TNN.2005.845142.

Cited By

1. Enhancing robot evolution through Lamarckian principles.
   Sci Rep. 2023 Nov 30;13(1):21109. doi: 10.1038/s41598-023-48338-4.
2. Application of Meta-Heuristic Algorithms for Training Neural Networks and Deep Learning Architectures: A Comprehensive Review.
   Neural Process Lett. 2022 Oct 31:1-104. doi: 10.1007/s11063-022-11055-6.