Research Institute of Electrical Communication, Tohoku University, Sendai 980-8577, Japan.
Graduate School of Biomedical Engineering, Tohoku University, Sendai 980-8579, Japan.
Proc Natl Acad Sci U S A. 2023 Jun 20;120(25):e2217008120. doi: 10.1073/pnas.2217008120. Epub 2023 Jun 12.
Reservoir computing is a machine learning paradigm that transforms the transient dynamics of high-dimensional nonlinear systems for processing time-series data. Although the paradigm was initially proposed to model information processing in the mammalian cortex, it remains unclear how nonrandom network architectures in the cortex, such as the modular architecture, integrate with the biophysics of living neurons to characterize the function of biological neuronal networks (BNNs). Here, we used optogenetics and calcium imaging to record the multicellular responses of cultured BNNs and employed the reservoir computing framework to decode their computational capabilities. Micropatterned substrates were used to embed the modular architecture in the BNNs. We first show that the dynamics of modular BNNs in response to static inputs can be classified with a linear decoder and that the modularity of the BNNs positively correlates with the classification accuracy. We then used a timer task to verify that BNNs possess a short-term memory of several hundred milliseconds and finally show that this property can be exploited for spoken digit classification. Interestingly, BNN-based reservoirs allow categorical learning, wherein a network trained on one dataset can be used to classify separate datasets of the same category. Such classification was not possible when the inputs were directly decoded by a linear decoder, suggesting that BNNs act as a generalization filter to improve reservoir computing performance. Our findings pave the way toward a mechanistic understanding of information representation within BNNs and lay the groundwork for physical reservoir computing systems based on BNNs.
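To make the decoding scheme in the abstract concrete, the sketch below illustrates the general reservoir-computing readout it describes: the recorded multicellular responses serve as fixed reservoir states, and only a linear decoder is trained on top of them. This is a minimal illustration under assumed data shapes and a ridge-regularized readout; the array names, trial counts, and regularization strength are hypothetical and are not taken from the authors' pipeline.

```python
# Minimal sketch of a reservoir-computing readout, assuming the recorded
# multicellular calcium responses are arranged as a (trials x features)
# state matrix X with one class label per trial. All names and shapes
# below are illustrative assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder "reservoir states": e.g., 200 trials, 50 neurons, 20 time bins
# of fluorescence traces flattened into one feature vector per trial.
n_trials, n_neurons, n_bins = 200, 50, 20
X = rng.normal(size=(n_trials, n_neurons * n_bins))
y = rng.integers(0, 4, size=n_trials)          # e.g., 4 stimulus classes

# Only the linear readout is trained; the "reservoir" (the BNN) stays fixed.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
decoder = RidgeClassifier(alpha=1.0)           # linear decoder with L2 penalty
decoder.fit(X_train, y_train)
print(f"classification accuracy: {decoder.score(X_test, y_test):.2f}")
```

With random placeholder states the accuracy stays near chance; the point of the sketch is only the division of labor, with a fixed nonlinear system supplying the states and a trained linear decoder performing the classification.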