
Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.

Authors

Daruwalla Kyle, Lipasti Mikko

Affiliations

Cold Spring Harbor Laboratory, Long Island, NY, United States.

Electrical and Computer Engineering Department, University of Wisconsin-Madison, Madison, WI, United States.

Publication

Front Comput Neurosci. 2024 May 16;18:1240348. doi: 10.3389/fncom.2024.1240348. eCollection 2024.

Abstract

Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware. Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that back-propagation, which makes training such artificial deep networks possible, is biologically implausible. Neuroscientists are uncertain about how the brain would propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains elusive. In contrast, novel learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, propagation is implicit due to the layers' feedforward connectivity. These rules take the form of a three-factor Hebbian update in which a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples concurrently, whereas the brain only sees a single sample at a time. We propose a new three-factor update rule where the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained independently of the dataset used with the primary network. We demonstrate comparable performance to baselines on image classification tasks. Interestingly, unlike back-propagation-like schemes, where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit. We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning where each layer balances memory-informed compression against task performance. This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.
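As a rough illustration of the rule's general form (not the authors' implementation), the sketch below applies a generic three-factor Hebbian update in which a scalar global modulator gates the local product of pre- and postsynaptic activity within one layer. The function name, array shapes, and the placeholder global_signal are hypothetical; in the paper, the modulatory factor is derived from the information bottleneck objective via the auxiliary memory network rather than supplied directly.

# Minimal sketch, assuming a layer-local three-factor Hebbian update:
# delta_W = lr * global_signal * (post ⊗ pre).
# The global_signal here is a stand-in scalar, not the IB-derived quantity.
import numpy as np

def three_factor_update(weights, pre, post, global_signal, lr=1e-3):
    """Return updated weights for one layer.

    weights       -- (n_post, n_pre) synaptic weight matrix
    pre           -- (n_pre,) presynaptic activity for the current sample
    post          -- (n_post,) postsynaptic activity for the current sample
    global_signal -- scalar modulator (placeholder for the IB-derived factor)
    """
    # Local Hebbian term: outer product of post- and presynaptic activity.
    hebbian = np.outer(post, pre)
    # Third factor: the layer-wide modulatory signal gates the local update.
    return weights + lr * global_signal * hebbian

# Toy usage with random activities and an arbitrary modulator value.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1
W = three_factor_update(W, pre=rng.random(8), post=rng.random(4), global_signal=0.5)

Because the update for each layer depends only on that layer's activity plus a single modulatory scalar, no error needs to be propagated backward across layers, which is the locality property the abstract emphasizes.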


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e324/11137249/e848b5dac73f/fncom-18-1240348-g0001.jpg
