Krenzer Dominik, Bogdan Martin
Neuromorphic Information Processing, Leipzig University, Leipzig, Germany.
Center for Scalable Data Analytics and Artificial Intelligence, Leipzig, Germany.
Front Comput Neurosci. 2025 May 23;19:1569374. doi: 10.3389/fncom.2025.1569374. eCollection 2025.
Feedback and reinforcement signals in the brain act as nature's sophisticated teaching tools, guiding neural circuits toward self-organization, adaptation, and the encoding of complex patterns. This study investigates the impact of two feedback mechanisms within a deep liquid state machine architecture designed for spiking neural networks.
The Reinforced Liquid State Machine architecture integrates liquid layers, a winner-takes-all mechanism, a linear readout layer, and a novel reward-based reinforcement system to enhance learning efficacy. While traditional Liquid State Machines often rely on unsupervised approaches, we introduce strict feedback to improve network performance by not only reinforcing correct predictions but also penalizing incorrect ones.
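To make the pipeline concrete, the following is a minimal sketch of how stacked liquid layers, a winner-takes-all stage, and a linear readout could be composed, assuming a simplified leaky integrate-and-fire reservoir. All function names, dimensions, and parameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a deep liquid state machine forward pass, assuming a
# simplified leaky integrate-and-fire (LIF) reservoir. Names and parameters
# are illustrative, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def lif_liquid(spikes_in, w_in, w_rec, tau=0.9, v_th=1.0):
    """Run one liquid (reservoir) layer over a spike train of shape (T, n_in)."""
    n_res = w_rec.shape[0]
    v = np.zeros(n_res)                            # membrane potentials
    out = np.zeros((spikes_in.shape[0], n_res))    # emitted spikes per step
    for t, x in enumerate(spikes_in):
        rec = w_rec @ out[t - 1] if t > 0 else 0.0  # recurrent input from last step
        v = tau * v + w_in @ x + rec                # leaky integration
        spikes = (v >= v_th).astype(float)          # threshold crossing -> spike
        v[spikes > 0] = 0.0                         # reset after spiking
        out[t] = spikes
    return out

def winner_takes_all(rates):
    """Hard WTA: keep only the most active unit per time step."""
    mask = np.zeros_like(rates)
    mask[np.arange(len(rates)), rates.argmax(axis=1)] = 1.0
    return rates * mask

# Toy dimensions: T time steps, n_in input channels, two stacked liquids.
T, n_in, n_res, n_classes = 100, 20, 64, 10
x = (rng.random((T, n_in)) < 0.1).astype(float)     # Poisson-like input spikes

w_in1, w_rec1 = rng.normal(0, 0.3, (n_res, n_in)), rng.normal(0, 0.1, (n_res, n_res))
w_in2, w_rec2 = rng.normal(0, 0.3, (n_res, n_res)), rng.normal(0, 0.1, (n_res, n_res))

liquid1 = lif_liquid(x, w_in1, w_rec1)
liquid2 = lif_liquid(liquid1, w_in2, w_rec2)         # "deep": stacked liquid layers
state = winner_takes_all(liquid2).mean(axis=0)       # time-averaged WTA state

w_out = rng.normal(0, 0.1, (n_classes, n_res))       # linear readout weights
prediction = int((w_out @ state).argmax())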
Strict feedback is compared with an alternative strategy, forgiving feedback, which omits punishment, using evaluations on the Spiking Heidelberg dataset. Experimental results demonstrate that both feedback mechanisms significantly outperform the baseline unsupervised approach, achieving superior accuracy and adaptability in response to dynamic input patterns.
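The distinction between the two feedback regimes can be illustrated with a reward-modulated update of the readout weights: strict feedback rewards correct predictions and punishes wrong ones, while forgiving feedback rewards only. The update rule below is an assumed, simplified illustration, not the authors' exact learning rule.

# Hedged sketch of strict vs. forgiving feedback as a reward-modulated update
# of the linear readout weights. The rule and parameters are illustrative.
import numpy as np

def feedback_update(w_out, state, prediction, label, lr=0.01, mode="strict"):
    """Apply one reward-modulated update to the readout weights."""
    if prediction == label:
        reward = +1.0                  # reinforce the correct prediction
    elif mode == "strict":
        reward = -1.0                  # strict feedback: punish the wrong prediction
    else:
        reward = 0.0                   # forgiving feedback: no punishment
    w_out = w_out.copy()
    w_out[prediction] += lr * reward * state   # scale the change by the liquid state
    return w_out

In this sketch, both regimes strengthen the readout row of a correctly predicted class; only the strict regime weakens the row of a wrongly predicted class, which is the "penalizing wrong ones" component described above.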
This comparative analysis highlights the potential of feedback integration in deep Liquid State Machines, offering insights into optimizing spiking neural networks through reinforcement-driven architectures.