Graduate School of Engineering, The University of Tokyo, 113-8656 Tokyo, Japan; International Research Center for Neurointelligence, The University of Tokyo, 113-0033 Tokyo, Japan.
Neural Netw. 2021 Nov;143:550-563. doi: 10.1016/j.neunet.2021.06.031. Epub 2021 Jul 6.
Reservoir computing is a machine learning framework derived from a special type of recurrent neural network. Following recent advances in physical reservoir computing, some reservoir computing devices are regarded as promising energy-efficient machine learning hardware for real-time information processing. To realize efficient online learning with low-power reservoir computing devices, it is beneficial to develop fast-converging learning methods that require only simple operations. This study proposes a training method that lies between the recursive least squares (RLS) method and the least mean squares (LMS) method, the two standard online learning methods for reservoir computing models. The RLS method converges quickly but requires updates of a large matrix called the gain matrix, whereas the LMS method does not use a gain matrix but converges very slowly. The proposed method, called the transfer-RLS method, avoids gain-matrix updates in the main-training phase by performing them in advance (i.e., in a pre-training phase). As a result, the transfer-RLS method requires simpler operations than the original RLS method without sacrificing much convergence speed. We show numerically and analytically that the transfer-RLS method converges much faster than the LMS method. Furthermore, we show that a modified version of the transfer-RLS method (called transfer-FORCE learning) can be applied to first-order reduced and controlled error (FORCE) learning for a reservoir computing model with a closed loop, which is known to be challenging to train.
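The following is a minimal sketch of the contrast the abstract draws between the three update rules. It assumes the transfer-RLS main phase reuses a gain matrix P, pre-trained with ordinary RLS, as a fixed matrix-valued step size in an otherwise LMS-like update; the function names, the toy task, and the exact transfer-RLS schedule are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

N = 50          # reservoir size
T_pre = 500     # pre-training steps (gain matrix is updated here)
T_main = 2000   # main-training steps (gain matrix is frozen here)
lam = 1.0       # RLS forgetting factor (1.0 = no forgetting)
eta = 1e-3      # LMS step size, shown only for comparison

def rls_step(w, P, x, d):
    """One RLS step: update both the gain matrix P and the weights w.
    Update rule: k = P x / (lam + x^T P x); P <- (P - k x^T P) / lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # Kalman-like gain vector
    P[:] = (P - np.outer(k, Px)) / lam   # costly matrix update, every step
    e = d - w @ x                        # a-priori output error
    w += e * k
    return e

def lms_step(w, x, d):
    """One LMS step: scalar step size, no gain matrix, slow convergence."""
    e = d - w @ x
    w += eta * e * x
    return e

def transfer_rls_main_step(w, P_fixed, x, d):
    """Main-phase transfer-RLS step (sketch): LMS-like vector operations,
    with the pre-trained gain matrix replacing the scalar step size, so
    no matrix update is needed online."""
    e = d - w @ x
    w += e * (P_fixed @ x)
    return e

# Toy problem: recover a fixed linear readout w_true from noisy states.
# Real reservoir states would come from a driven recurrent network.
w_true = rng.normal(size=N)
def sample():
    x = np.tanh(rng.normal(size=N))      # stand-in for a reservoir state
    return x, w_true @ x + 0.01 * rng.normal()

# Pre-training phase: run full RLS to shape the gain matrix P.
w = np.zeros(N)
P = np.eye(N) / 1e-2                     # P(0) = I / delta
for _ in range(T_pre):
    x, d = sample()
    rls_step(w, P, x, d)

# Main-training phase: P is frozen; only cheap vector operations remain.
w_main = np.zeros(N)
for t in range(T_main):
    x, d = sample()
    e = transfer_rls_main_step(w_main, P, x, d)

print("final |error| with fixed gain matrix:", abs(e))

In this reading, the main phase has the same per-step cost profile as LMS (matrix-vector products only, no rank-one matrix update), while the pre-trained P preconditions the update direction, which is what would account for convergence much faster than plain LMS.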