Soriano Miguel C, Brunner Daniel, Escalona-Morán Miguel, Mirasso Claudio R, Fischer Ingo
Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain.
Front Comput Neurosci. 2015 Jun 2;9:68. doi: 10.3389/fncom.2015.00068. eCollection 2015.
Learning how the brain processes information and mimicking it has been a major research challenge for decades. Despite these efforts, little is known about how we encode, maintain, and retrieve information. One hypothesis assumes that transient states are generated in our intricate network of neurons when the brain is stimulated by a sensory input. Based on this idea, powerful computational schemes have been developed. These schemes, known as machine-learning techniques, include artificial neural networks, support vector machines, and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where the information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a self-feedback loop with delay. This design is not intended to represent an actual brain circuit, but aims at identifying the minimal ingredients required to build an efficient information processor. This simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that nonlinear transient responses of the simple dynamical system enable the processing of information with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, in doing so, learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of the projection onto the required high-dimensional state space. Besides its relevance for the understanding of basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches.
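To make the delay-based architecture concrete, the following is a minimal sketch, not the authors' implementation: it assumes a discrete-time single node with a Mackey-Glass-type nonlinearity, N virtual nodes time-multiplexed along the delay line via a fixed random input mask, and a linear readout trained by ridge regression. All function names and parameter values are illustrative assumptions, and the update is a synchronous approximation that ignores the inertia coupling adjacent virtual nodes in an analog system.

    # Minimal sketch of delay-based reservoir computing with a single nonlinear node.
    # Assumptions (not from the abstract): a Mackey-Glass-type map, N virtual nodes
    # multiplexed in time, a random binary input mask, and a ridge-regression readout.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 50                          # virtual nodes along the delay line
    eta, gamma, p = 0.4, 0.05, 1    # feedback strength, input scaling, exponent
    mask = rng.choice([-1.0, 1.0], size=N)   # fixed random input mask

    def reservoir_states(u):
        # Drive the single node once per input sample; x holds the states one
        # delay interval in the past (the delayed self-feedback).
        x = np.zeros(N)
        states = np.empty((len(u), N))
        for k, uk in enumerate(u):
            for i in range(N):
                a = x[i] + gamma * mask[i] * uk                # delayed state + masked input
                x[i] = eta * a / (1.0 + np.abs(a) ** (2 * p))  # Mackey-Glass-type nonlinearity
            states[k] = x
        return states

    def train_readout(states, target, ridge=1e-6):
        # Linear readout via ridge regression: the only trained part of the scheme.
        S = np.hstack([states, np.ones((len(states), 1))])     # append a bias column
        return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ target)

    # Toy usage: train the readout to reproduce the input delayed by 3 steps.
    u = rng.uniform(0.0, 0.5, size=1000)
    X = reservoir_states(u)
    W = train_readout(X[10:], u[7:-3])

The essential point the sketch captures is that the high-dimensional state space is obtained not from many physical neurons but from many time slices of one node's nonlinear transient response, with only the output weights W being trained.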