
Spiking representation learning for associative memories.

Author information

Naresh Ravichandran, Anders Lansner, Pawel Herman

Affiliations

Computational Cognitive Brain Science Group, Department of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.

Department of Mathematics, Stockholm University, Stockholm, Sweden.

Publication information

Front Neurosci. 2024 Sep 19;18:1439414. doi: 10.3389/fnins.2024.1439414. eCollection 2024.

Abstract

Networks of interconnected neurons communicating through spiking signals form the bedrock of neural computation. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially when compared to their non-spiking deep learning counterparts. The critical capability required of SNNs is the ability to learn distributed representations from data and use these representations for perceptual, cognitive, and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations, leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations with recurrent projections for forming associative memories. We evaluated the model on properties relevant to attractor-based associative memories, such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
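Two of the ingredients the abstract names, Poisson spike generation with sparse rates and Hebbian plasticity, can be illustrated with a minimal sketch. The snippet below is not the authors' model (their BCPNN-style architecture, structural plasticity, and columnar organization are not reproduced here); the timestep, learning rate, network size, and function names are assumptions chosen purely for illustration. Clipping rates at ~100 Hz while keeping the mean near 1 Hz mirrors the sparse-firing regime the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def poisson_spikes(rates_hz, dt=0.001, max_rate_hz=100.0):
    """Sample one timestep of spikes from independent Poisson units.

    rates_hz: per-unit firing rates in Hz; the abstract reports
    ~1 Hz mean and ~100 Hz maximum, so rates are clipped here.
    dt: timestep in seconds (1 ms is an assumption).
    """
    p = np.clip(rates_hz, 0.0, max_rate_hz) * dt  # spike prob. per step
    return (rng.random(np.shape(rates_hz)) < p).astype(float)

def hebbian_update(W, pre, post, lr=0.01):
    """Generic Hebbian outer-product update: weights between
    co-active pre- and postsynaptic units are strengthened
    (learning rate is an assumed illustrative value)."""
    return W + lr * np.outer(post, pre)

# Toy usage: 100 recurrently connected units firing sparsely at ~1 Hz.
n = 100
rates = np.full(n, 1.0)      # sparse background rates (Hz)
W_rec = np.zeros((n, n))     # recurrent (associative) weights
for _ in range(1000):        # simulate 1 s at 1 ms resolution
    s = poisson_spikes(rates)
    W_rec = hebbian_update(W_rec, pre=s, post=s)
```

In an attractor-memory setting, a recurrent weight matrix trained this way lets a partial input pattern drive the remaining units toward a stored pattern, which is the pattern-completion property the paper evaluates.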


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6cec/11450452/caf37b21d135/fnins-18-1439414-g001.jpg
