

Learning attractors in an asynchronous, stochastic electronic neural network.

Authors

Del Giudice P, Fusi S, Badoni D, Dante V, Amit D J

Affiliation

Istituto Superiore di Sanità, Physics Laboratory, Rome, Italy.

Publication

Network. 1998 May;9(2):183-205. doi: 10.1088/0954-898x/9/2/003.

DOI: 10.1088/0954-898x/9/2/003
PMID: 9861985
Abstract

LANN27 is an electronic device implementing in discrete electronics a fully connected (full feedback) network of 27 neurons and 351 plastic synapses with stochastic Hebbian learning. Both neurons and synapses are dynamic elements, with two time constants--fast for neurons and slow for synapses. Learning, synaptic dynamics, is analogue and is driven in a Hebbian way by neural activities. Long-term memorization takes place on a discrete set of synaptic efficacies and is effected in a stochastic manner. The intense feedback between the nonlinear neural elements, via the learned synaptic structure, creates in an organic way a set of attractors for the collective retrieval dynamics of the neural system, akin to Hebbian learned reverberations. The resulting structure of the attractors is a record of the large-scale statistics in the uncontrolled, incoming flow of stimuli. As the statistics in the stimulus flow changes significantly, the attractors slowly follow it and the network behaves as a palimpsest--old is gradually replaced by new. Moreover, the slow learning creates attractors which render the network a prototype extractor: entire clouds of stimuli, noisy versions of a prototype, used in training, all retrieve the attractor corresponding to the prototype upon retrieval. Here we describe the process of studying the collective dynamics of the network, before, during and following learning, which is rendered complex by the richness of the possible stimulus streams and the large dimensionality of the space of states of the network. We propose sampling techniques and modes of representation for the outcome.
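The mechanism the abstract describes — a fully connected network of binary neurons whose discrete synaptic efficacies make stochastic, Hebbian-driven transitions, so that slow learning carves attractors at the prototypes underlying noisy stimulus streams — can be illustrated with a minimal software sketch. This is not a model of the LANN27 hardware: the transition probability `P_LEARN`, the noise levels, and the training-loop sizes are assumed, illustrative values, and synapses here are simplified to two states (±1).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 27          # number of neurons, as in LANN27
P_LEARN = 0.05  # per-presentation probability of a synaptic transition (illustrative)

# Binary synaptic efficacies J_ij in {-1, +1}; fully connected, no self-coupling.
J = rng.choice([-1, 1], size=(N, N)).astype(float)
np.fill_diagonal(J, 0.0)

def hebbian_step(J, s, q=P_LEARN, rng=rng):
    """Stochastic Hebbian learning: each synapse jumps to the Hebbian target
    sign(s_i * s_j) with probability q, giving slow, discrete-efficacy updates."""
    target = np.outer(s, s)                # Hebbian target for every pair
    flip = rng.random(J.shape) < q         # stochastic selection of synapses
    J = np.where(flip, target, J).astype(float)
    np.fill_diagonal(J, 0.0)
    return J

def retrieve(J, s, sweeps=20, rng=rng):
    """Asynchronous retrieval dynamics: neurons update one at a time,
    in random order, toward the sign of their local field."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            h = J[i] @ s
            if h != 0:
                s[i] = np.sign(h)
    return s

# Present many noisy versions of one prototype; slow stochastic learning
# should leave an attractor at the prototype itself (prototype extraction).
prototype = rng.choice([-1, 1], size=N)
for _ in range(400):
    noisy = prototype * np.where(rng.random(N) < 0.1, -1, 1)  # ~10% bits flipped
    J = hebbian_step(J, noisy)

cue = prototype * np.where(rng.random(N) < 0.2, -1, 1)        # degraded cue
out = retrieve(J, cue)
overlap = (out @ prototype) / N
print(f"overlap with prototype after retrieval: {overlap:.2f}")
```

Presenting a fresh prototype for long enough would gradually overwrite this attractor, reproducing the palimpsest behaviour described above.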


Similar Articles

1. Learning attractors in an asynchronous, stochastic electronic neural network.
   Network. 1998 May;9(2):183-205. doi: 10.1088/0954-898x/9/2/003.
2. Slow stochastic Hebbian learning of classes of stimuli in a recurrent neural network.
   Network. 1998 Feb;9(1):123-52.
3. Hebbian learning of context in recurrent neural networks.
   Neural Comput. 1996 Nov 15;8(8):1677-710. doi: 10.1162/neco.1996.8.8.1677.
4. Hebbian spike-driven synaptic plasticity for learning patterns of mean firing rates.
   Biol Cybern. 2002 Dec;87(5-6):459-70. doi: 10.1007/s00422-002-0356-8.
5. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.
   Neural Comput. 2008 Dec;20(12):2937-66. doi: 10.1162/neco.2008.05-07-530.
6. The road to chaos by time-asymmetric Hebbian learning in recurrent neural networks.
   Neural Comput. 2007 Jan;19(1):80-110. doi: 10.1162/neco.2007.19.1.80.
7. Convergence of stochastic learning in perceptrons with binary synapses.
   Phys Rev E Stat Nonlin Soft Matter Phys. 2005 Jun;71(6 Pt 1):061907. doi: 10.1103/PhysRevE.71.061907. Epub 2005 Jun 16.
8. Computing with continuous attractors: stability and online aspects.
   Neural Comput. 2005 Oct;17(10):2215-39. doi: 10.1162/0899766054615626.
9. Learning in realistic networks of spiking neurons and spike-driven plastic synapses.
   Eur J Neurosci. 2005 Jun;21(11):3143-60. doi: 10.1111/j.1460-9568.2005.04087.x.
10. Learning real-world stimuli in a neural network with spike-driven synaptic dynamics.
    Neural Comput. 2007 Nov;19(11):2881-912. doi: 10.1162/neco.2007.19.11.2881.

Cited By

1. The Constrained Disorder Principle Overcomes the Challenges of Methods for Assessing Uncertainty in Biological Systems.
   J Pers Med. 2024 Dec 28;15(1):10. doi: 10.3390/jpm15010010.
2. Excitatory/inhibitory balance emerges as a key factor for RBN performance, overriding attractor dynamics.
   Front Comput Neurosci. 2023 Aug 9;17:1223258. doi: 10.3389/fncom.2023.1223258. eCollection 2023.