Maass Wolfgang, Natschläger Thomas, Markram Henry
Institute for Theoretical Computer Science, Technische Universität Graz, A-8010 Graz, Austria.
Neural Comput. 2002 Nov;14(11):2531-60. doi: 10.1162/089976602760407955.
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such a recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
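To make the architecture concrete, the following is a minimal illustrative sketch, not the paper's actual circuit model or parameters: a sparse random network of leaky integrate-and-fire neurons serves as the liquid, its low-pass-filtered spike trains form the liquid state, and a memoryless linear readout is fit by least squares to recover a delayed copy of the input, probing the fading memory of the transient dynamics. All parameter values and the readout task below are assumptions chosen for illustration.

    import numpy as np

    # Illustrative liquid state machine sketch (assumed parameters throughout;
    # the paper's model uses biologically detailed integrate-and-fire circuits).
    rng = np.random.default_rng(0)

    N, T, dt = 200, 5000, 1.0        # liquid size, simulation steps, ms per step
    tau_m, tau_s = 30.0, 30.0        # membrane / state-filter time constants (ms)
    v_thresh, v_reset = 0.5, 0.0     # spike threshold and reset (assumed values)

    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random recurrent weights
    W[rng.random((N, N)) > 0.1] = 0.0               # keep ~10% of connections
    w_in = rng.normal(0.0, 1.0, N)                  # input weights

    u = rng.uniform(0.0, 1.0, T)     # time-varying input stream
    v = np.zeros(N)                  # membrane potentials
    x = np.zeros(N)                  # filtered spike trains = analog liquid state
    states = np.zeros((T, N))

    for t in range(T):
        spikes = (v >= v_thresh).astype(float)
        v = np.where(spikes > 0, v_reset, v)              # reset after spiking
        v += dt / tau_m * (-v + W @ x + w_in * u[t])      # leaky integration
        x += dt / tau_s * (-x + spikes)                   # state seen by readout
        states[t] = x

    # Assumed readout task: reconstruct the input from 20 steps in the past.
    delay = 20
    X, target = states[delay:], u[:-delay]
    w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
    print("readout correlation:", np.corrcoef(X @ w_out, target)[0, 1])

A plain linear regression on the instantaneous state suffices as the readout here precisely because, as the abstract argues, the high-dimensional transient dynamics of the recurrent circuit already retain information about past inputs; no stable internal states are required.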