Chris Eliasmith, James Martens
Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L3G1, Canada.
Biol Cybern. 2011 May;104(4-5):251-62. doi: 10.1007/s00422-011-0433-y. Epub 2011 May 14.
Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference (Rao, Neural Computation, 16(1):1-38, 2004; Eliasmith and Anderson, Neural engineering: computation, representation and dynamics in neurobiological systems, 2003; Ma et al., Nature Neuroscience, 9(11):1432-1438, 2006; Sahani and Dayan, Neural Computation, 15(10):2255-2279, 2003). To be able to repeatedly perform such inference, it is essential that the represented distributions be appropriately normalized. Past approaches have considered normalization mechanisms independently of inference, often leaving them unexplored, or appealing to a notion of divisive normalization that requires pooling across many neurons. Here, we demonstrate how normalization and inference can be combined into an appropriate connection matrix, eliminating the need for pooling or a division-like operation. We algebraically demonstrate that such a solution is available regardless of the inference being performed. We show that such a solution is relevant to neural computation by implementing it in a recurrent spiking neural network.
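The abstract's central motivation is that repeatedly applying an inference step to a represented distribution degrades it unless the result is renormalized after each step. A minimal toy sketch of this point (this is an illustration of the problem, not the authors' connection-matrix construction; the state space, transition matrix, and likelihood weighting below are all invented for the example):

```python
import numpy as np

# Toy discretized Bayesian filtering: posterior ∝ likelihood × prediction.
# Without renormalization, the represented "distribution" loses mass on
# every iteration; with a divisive normalization step it stays a proper
# probability distribution. All quantities here are hypothetical.
rng = np.random.default_rng(0)

n = 5                                    # number of discrete states
T = rng.random((n, n))
T /= T.sum(axis=0)                       # column-stochastic transition matrix
L = np.diag(rng.random(n))               # diagonal likelihood weighting (< 1)

p_raw = np.ones(n) / n                   # recursion without normalization
p_norm = p_raw.copy()                    # recursion with normalization

for _ in range(20):
    p_raw = L @ T @ p_raw                # unnormalized update: mass decays
    q = L @ T @ p_norm
    p_norm = q / q.sum()                 # divisive normalization step

print(p_raw.sum())                       # total mass far below 1
print(p_norm.sum())                      # total mass exactly 1
```

The point of the paper is that this explicit division (a nonlinear, pooled operation) can instead be absorbed into the recurrent connection weights of a spiking network, so no division-like mechanism is needed at inference time.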