School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30363, USA.
Neural Comput. 2012 Dec;24(12):3317-39. doi: 10.1162/NECO_a_00372. Epub 2012 Sep 12.
The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate l(p) norms (0 ≤ p ≤ 2), modified l(p) norms, block-l1 norms, and reweighted algorithms. Of particular interest, we show significantly improved performance in reweighted l1 algorithms when all parameters are inferred jointly in a dynamical system rather than through the iterative approach native to digital computational architectures.
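To make the LCA architecture referenced above concrete, the following is a minimal sketch of the standard LCA dynamics for the l1 (soft-threshold) cost, integrated with forward Euler. The dictionary Phi, signal s, threshold lam, time constant tau, and step counts are illustrative assumptions for exposition, not values from the paper.

    # Minimal LCA sketch for the l1 cost (soft-threshold nonlinearity).
    # All parameter values here are illustrative assumptions.
    import numpy as np

    def soft_threshold(u, lam):
        """Thresholding nonlinearity for the l1 cost: a = T_lam(u)."""
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    def lca_l1(Phi, s, lam=0.1, tau=10.0, dt=1.0, n_steps=500):
        """Evolve LCA membrane potentials u toward a sparse code a of s."""
        n_neurons = Phi.shape[1]
        G = Phi.T @ Phi - np.eye(n_neurons)    # lateral inhibition (competition)
        b = Phi.T @ s                          # feedforward drive
        u = np.zeros(n_neurons)
        for _ in range(n_steps):
            a = soft_threshold(u, lam)
            u += (dt / tau) * (b - u - G @ a)  # leaky integration with inhibition
        return soft_threshold(u, lam)

    # Toy usage: recover a sparse code for a synthetic signal.
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 256))
    Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary columns
    a_true = np.zeros(256)
    a_true[rng.choice(256, size=5, replace=False)] = 1.0
    s = Phi @ a_true
    a_hat = lca_l1(Phi, s)
    print("nonzeros recovered:", np.count_nonzero(np.abs(a_hat) > 1e-3))

In this formulation, swapping the thresholding function changes which sparsity penalty the network optimizes, which is the mechanism behind the family of cost functions listed in the abstract.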
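The abstract's final claim concerns running reweighting jointly with the coefficient dynamics rather than as an outer digital loop. Below is a hedged sketch of that idea: per-coefficient thresholds lam_i relax on a slower timescale toward beta / (|a_i| + eps), the classic Candes-Wakin-Boyd reweighting rule. The specific weight update and all constants are assumptions chosen for illustration, not the paper's exact dynamical system.

    # Sketch of reweighted-l1 LCA with thresholds evolving jointly with u,
    # instead of an iterative solve/reweight/re-solve loop. Illustrative only.
    import numpy as np

    def soft_threshold(u, lam):
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    def lca_reweighted_l1(Phi, s, beta=0.05, eps=0.1, tau_u=10.0,
                          tau_lam=50.0, dt=1.0, n_steps=2000):
        n = Phi.shape[1]
        G = Phi.T @ Phi - np.eye(n)
        b = Phi.T @ s
        u = np.zeros(n)
        lam = np.full(n, beta / eps)            # uniform initial thresholds
        for _ in range(n_steps):
            a = soft_threshold(u, lam)
            u += (dt / tau_u) * (b - u - G @ a)                       # fast coefficients
            lam += (dt / tau_lam) * (beta / (np.abs(a) + eps) - lam)  # slow weights
        return soft_threshold(u, lam)

Because the weights adapt continuously while the coefficients settle, the two sets of variables are inferred together in one dynamical system, which is the contrast the abstract draws with iteration-based digital implementations.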