
Self-Optimization in Continuous-Time Recurrent Neural Networks

Authors

Zarco Mario, Froese Tom

Affiliations

Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Mexico City, Mexico.

Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City, Mexico.

Publication

Front Robot AI. 2018 Aug 21;5:96. doi: 10.3389/frobt.2018.00096. eCollection 2018.

DOI: 10.3389/frobt.2018.00096
PMID: 33500975
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805835/
Abstract

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors, even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
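The mechanism the abstract describes is easiest to see in its original Hopfield-network form (Watson et al.'s "global adaptation", cited below), before the paper's extension to continuous-time dynamics. A minimal sketch, assuming a discrete-state network with random symmetric weights: repeatedly relax from random initial states, apply a weak Hebbian update on each converged attractor, and compare attractor quality on the original weights before and after learning. Network size, learning rate, and epoch counts here are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of units (illustrative size)

# Random symmetric weights with zero diagonal define a frustrated
# constraint network whose attractors are typically suboptimal local minima.
W = rng.normal(size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def relax(weights, s, sweeps=10):
    """Asynchronous threshold updates until the state settles on an attractor."""
    s = s.copy()
    for _ in range(sweeps * N):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(weights, s):
    """Hopfield energy: lower means more constraints satisfied."""
    return float(-0.5 * s @ weights @ s)

# Baseline: average energy of attractors reached without any learning.
starts = [rng.choice([-1, 1], size=N) for _ in range(20)]
before = float(np.mean([energy(W, relax(W, s0)) for s0 in starts]))

# Self-optimization: repeatedly relax from random states and take a weak
# Hebbian step on each converged state, so the network forms an
# associative memory of (and generalizes over) its own attractors.
W_learn = W.copy()
alpha = 0.001
for _ in range(300):
    s = relax(W_learn, rng.choice([-1, 1], size=N))
    W_learn += alpha * np.outer(s, s)
    np.fill_diagonal(W_learn, 0)

# Evaluate: attractors of the learned network, scored against the ORIGINAL
# weights, are typically better than the unlearned baseline.
after = float(np.mean([energy(W, relax(W_learn, s0)) for s0 in starts]))
print(f"mean attractor energy before: {before:.1f}, after: {after:.1f}")
```

The key point, mirrored in the sketch, is that learning changes the landscape (`W_learn`) while solution quality is still judged by the original constraints (`W`); the paper's contribution is showing that the same loop still works when the substrate is a CTRNN with asymmetric connections rather than a symmetric Hopfield network.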


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cc67/7805835/d68185b55768/frobt-05-00096-g0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cc67/7805835/c20f49c2f002/frobt-05-00096-g0002.jpg

Similar Articles

1
Self-Optimization in Continuous-Time Recurrent Neural Networks.
Front Robot AI. 2018 Aug 21;5:96. doi: 10.3389/frobt.2018.00096. eCollection 2018.
2
Network capacity analysis for latent attractor computation.
Network. 2003 May;14(2):273-302.
3
Unsupervised Learning Facilitates Neural Coordination Across the Functional Clusters of the Connectome.
Front Robot AI. 2020 Apr 2;7:40. doi: 10.3389/frobt.2020.00040. eCollection 2020.
4
Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.
Neural Netw. 2015 Feb;62:39-46. doi: 10.1016/j.neunet.2014.08.011. Epub 2014 Sep 16.
5
NDRAM: nonlinear dynamic recurrent associative memory for learning bipolar and nonbipolar correlated patterns.
IEEE Trans Neural Netw. 2005 Nov;16(6):1393-400. doi: 10.1109/TNN.2005.852861.
6
Collective computational intelligence in biology - Emergence of memory in somatic tissues.
Biosystems. 2023 Jan;223:104816. doi: 10.1016/j.biosystems.2022.104816. Epub 2022 Nov 25.
7
Coexistence of Cyclic Sequential Pattern Recognition and Associative Memory in Neural Networks by Attractor Mechanisms.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4959-4970. doi: 10.1109/TNNLS.2024.3368092. Epub 2025 Feb 28.
8
Bistable gradient networks. I. Attractors and pattern retrieval at low loading in the thermodynamic limit.
Phys Rev E Stat Nonlin Soft Matter Phys. 2003 Jan;67(1 Pt 2):016118. doi: 10.1103/PhysRevE.67.016118. Epub 2003 Jan 30.
9
Effect of dilution in asymmetric recurrent neural networks.
Neural Netw. 2018 Aug;104:50-59. doi: 10.1016/j.neunet.2018.04.003. Epub 2018 Apr 16.
10
The road to chaos by time-asymmetric Hebbian learning in recurrent neural networks.
Neural Comput. 2007 Jan;19(1):80-110. doi: 10.1162/neco.2007.19.1.80.

Cited By

1
Irruption and Absorption: A 'Black-Box' Framework for How Mind and Matter Make a Difference to Each Other.
Entropy (Basel). 2024 Mar 27;26(4):288. doi: 10.3390/e26040288.
2
Irruption Theory: A Novel Conceptualization of the Enactive Account of Motivated Activity.
Entropy (Basel). 2023 May 2;25(5):748. doi: 10.3390/e25050748.
3
Unsupervised Learning Facilitates Neural Coordination Across the Functional Clusters of the Connectome.
Front Robot AI. 2020 Apr 2;7:40. doi: 10.3389/frobt.2020.00040. eCollection 2020.

References

1
Multistability in Large Scale Models of Brain Activity.
PLoS Comput Biol. 2015 Dec 28;11(12):e1004644. doi: 10.1371/journal.pcbi.1004644. eCollection 2015 Dec.
2
Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.
Neural Netw. 2015 Feb;62:39-46. doi: 10.1016/j.neunet.2014.08.011. Epub 2014 Sep 16.
3
Unsupervised Learning Facilitates Neural Coordination Across the Functional Clusters of the Connectome.
Front Robot AI. 2020 Apr 2;7:40. doi: 10.3389/frobt.2020.00040. eCollection 2020.
4
Using Effect Size-or Why the P Value Is Not Enough.
J Grad Med Educ. 2012 Sep;4(3):279-82. doi: 10.4300/JGME-D-12-00156.1.
5
A dynamical systems account of sensorimotor contingencies.
Front Psychol. 2013 May 27;4:285. doi: 10.3389/fpsyg.2013.00285. eCollection 2013.
6
Global adaptation in networks of selfish components: emergent associative memory at the system scale.
Artif Life. 2011 Summer;17(3):147-66. doi: 10.1162/artl_a_00029. Epub 2011 May 9.
7
A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.
Neural Comput. 2008 Dec;20(12):2937-66. doi: 10.1162/neco.2008.05-07-530.
8
Optimization by simulated annealing.
Science. 1983 May 13;220(4598):671-80. doi: 10.1126/science.220.4598.671.
9
Resilient machines through continuous self-modeling.
Science. 2006 Nov 17;314(5802):1118-21. doi: 10.1126/science.1133687.
10
Dynamical approaches to cognitive science.
Trends Cogn Sci. 2000 Mar;4(3):91-99. doi: 10.1016/s1364-6613(99)01440-0.
11
Neural networks and physical systems with emergent collective computational abilities.
Proc Natl Acad Sci U S A. 1982 Apr;79(8):2554-8. doi: 10.1073/pnas.79.8.2554.