
Sparse Computation in Adaptive Spiking Neural Networks

Authors

Zambrano Davide, Nusselder Roeland, Scholte H Steven, Bohté Sander M

Affiliations

Machine Learning Group, CWI, Amsterdam, Netherlands.

Programme Group Brain and Cognition, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands.

Publication

Front Neurosci. 2019 Jan 8;12:987. doi: 10.3389/fnins.2018.00987. eCollection 2018.

Abstract

Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons that communicate sparingly and efficiently using isomorphic binary spikes. While Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons (Cao et al., 2015; Diehl et al., 2015) to obtain reasonable performance, these SNNs use Poisson spiking mechanisms with exceedingly high firing rates compared to their biological counterparts. Here we show how spiking neurons that employ a form of neural coding can be used to construct SNNs that match high-performance ANNs and match or exceed state-of-the-art in SNNs on important benchmarks, while requiring firing rates compatible with biological findings. For this, we use spike-based coding based on the firing rate limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in fast adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out competitive classification in deep neural networks without further modifications. Adaptive spike-based coding additionally allows for the dynamic control of neural coding precision: we show empirically how a simple model of arousal in AdSNNs further halves the average required firing rate and this notion naturally extends to other forms of attention as studied in neuroscience. AdSNNs thus hold promise as a novel and sparsely active model for neural computation that naturally fits to temporally continuous and asynchronous applications.
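The abstract describes fast-adapting spiking neuron models in which adaptation limits the firing rate. A minimal sketch of this idea, using a generic leaky integrator with a spike-triggered adaptive threshold; the parameter names and dynamics here are illustrative assumptions, not the exact model derived in the paper:

```python
def simulate_adaptive_neuron(input_current, dt=1.0, tau_v=20.0,
                             theta0=1.0, theta_add=1.0, tau_theta=50.0):
    """Leaky integrator with a spike-triggered adaptive threshold.

    Illustrative sketch only: a generic adaptive-LIF formulation,
    not the specific transfer function derived in the paper.
    Returns the list of spike times (in time steps).
    """
    v, theta = 0.0, theta0
    spikes = []
    for t, current in enumerate(input_current):
        v += dt / tau_v * (current - v)              # leaky integration
        theta += dt / tau_theta * (theta0 - theta)   # threshold decays to rest
        if v >= theta:
            spikes.append(t)
            v = 0.0                                  # reset membrane potential
            theta += theta_add                       # adaptation: raise threshold
    return spikes

# Constant drive: each spike raises the threshold, so spikes space out.
steady = simulate_adaptive_neuron([5.0] * 500)
```

Because every spike raises the threshold, a constant input yields progressively longer inter-spike intervals until threshold decay and spiking balance, which is the firing-rate-limiting behavior the abstract attributes to biological adaptation.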


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b7f/6332470/5f27c41da350/fnins-12-00987-g0001.jpg
