Electronic Systems Laboratory, Georgia Tech Research Institute, 400 10th St NW, Atlanta, Georgia 30318, United States of America.
Int J Neural Syst. 2014 Aug;24(5):1440001. doi: 10.1142/S0129065714400012. Epub 2014 Mar 23.
Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g. in V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded spiking neural network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encodes 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when more biophysically realistic parameters are used in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
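The analog (nonspiking) LCA that the Spiking LCA is designed to match can be sketched as a simple dynamical system: internal states are driven by the stimulus projected onto a dictionary, inhibited by competing active units, and passed through a soft-threshold gain function. The sketch below is illustrative only; the dictionary, parameter values, and function names are assumptions for demonstration, not taken from the paper.

```python
import numpy as np

def analog_lca(Phi, s, lam=0.1, tau=0.01, dt=0.001, steps=2000):
    """Minimal sketch of analog LCA dynamics.

    Evolves internal states u so that the thresholded outputs a
    approach a sparse code minimizing ||s - Phi a||^2 + lam * ||a||_1.
    Phi: (pixels x neurons) dictionary; s: stimulus vector.
    """
    n = Phi.shape[1]
    u = np.zeros(n)
    b = Phi.T @ s                   # feedforward drive from the stimulus
    G = Phi.T @ Phi - np.eye(n)     # lateral inhibition between overlapping units
    for _ in range(steps):
        # Soft-threshold gain: units below lam stay silent (sparsity)
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        # Leaky integration toward drive, minus competition from active units
        u += (dt / tau) * (b - u - G @ a)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```

For an orthonormal dictionary the lateral-inhibition term vanishes and the steady state reduces to soft-thresholding of the stimulus coefficients, which gives a quick sanity check on the dynamics.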