Ma Haochun, Prosperino Davide, Räth Christoph
Department of Physics, Ludwig-Maximilians-Universität, Schellingstraße 4, 80799, Munich, Germany.
Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für KI Sicherheit, Wilhelm-Runge-Straße 10, 89081, Ulm, Germany.
Sci Rep. 2023 Aug 10;13(1):12970. doi: 10.1038/s41598-023-39886-w.
Reservoir computers are powerful machine learning algorithms for predicting nonlinear systems. Unlike traditional feedforward neural networks, they work on small training data sets, operate with linear optimization, and therefore require minimal computational resources. However, the traditional reservoir computer uses random matrices to define the underlying recurrent neural network and has a large number of hyperparameters that need to be optimized. Recent approaches show that randomness can be taken out by running regressions on a large library of linear and nonlinear combinations constructed from the input data, their time lags, and polynomials thereof. However, for high-dimensional and nonlinear data, the number of these combinations explodes. Here, we show that a few simple changes to the traditional reservoir computer architecture, which further reduce computational resources, lead to significant and robust improvements in short- and long-term predictive performance compared to similar models, while requiring only minimal training data sets.
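To make the architecture described above concrete, the following is a minimal sketch of a traditional echo-state reservoir computer: a fixed random recurrent network driven by the input, with only a linear (ridge-regression) readout being trained. All names, sizes, and the toy sine-wave task are illustrative assumptions, not the paper's actual setup or code.

```python
import numpy as np

# Minimal echo-state reservoir computer sketch (illustrative assumptions:
# tanh activation, ridge-regression readout, toy one-step-ahead task).
rng = np.random.default_rng(0)

# Toy 1-D signal: a sine wave we try to predict one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)

n_res = 200                                       # reservoir size
W_in = rng.uniform(-0.5, 0.5, (n_res,))           # random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))        # random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius < 1

# Drive the reservoir with the input sequence; states[k] encodes u[0..k-1].
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[i])
    states[i + 1] = x

# Linear optimization only: ridge regression from states[k] to target u[k],
# i.e. one-step-ahead prediction of the input.
train = slice(100, 1500)                          # discard initial transient
X, y = states[train], u[train]
beta = 1e-6                                       # ridge regularization
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ y)

# Evaluate the trained readout on a held-out portion of the signal.
test = slice(1500, 2000)
pred = states[test] @ W_out
rmse = np.sqrt(np.mean((pred - u[test]) ** 2))
print(f"test RMSE: {rmse:.4f}")
```

Note that only `W_out` is learned; `W_in` and `W` stay random and fixed, which is exactly the source of the hyperparameter-tuning burden (spectral radius, reservoir size, input scaling) that the abstract contrasts with regression-on-a-feature-library approaches.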