Barbier Thomas, Teulière Céline, Triesch Jochen
SIGMA Clermont, Centre National de la Recherche Scientifique, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France.
Life- and Neurosciences, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.
Front Robot AI. 2025 Jan 15;11:1435197. doi: 10.3389/frobt.2024.1435197. eCollection 2024.
Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption. Here, we propose the first AEC system that is fully implemented as a Spiking Neural Network (SNN) driven by inputs from an event-based camera. This input is efficiently encoded by a two-layer SNN, which in turn feeds into a spiking reinforcement learner that learns motor commands to maximize an intrinsic reward signal. This reward signal is computed directly from the activity levels of the first two layers. We test our approach on two different behaviors: visual tracking of a translating target and stabilization of the orientation of a rotating target. To the best of our knowledge, our work represents the first fully spiking AEC model.
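The abstract describes a two-layer spiking encoder whose activity levels supply an intrinsic reward for a reinforcement learner. The following is a minimal sketch of that idea, not the authors' implementation: a toy two-layer leaky integrate-and-fire (LIF) encoder driven by binary event-like inputs, with an intrinsic reward read off the layers' spike counts. All names, layer sizes, and parameters (`tau`, `v_th`, the event rate, the sign of the reward) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class LIFLayer:
    """Toy leaky integrate-and-fire layer (illustrative, not from the paper)."""

    def __init__(self, n_in, n_out, tau=20.0, v_th=1.0):
        self.w = rng.normal(0.0, 0.5, size=(n_out, n_in))  # fixed random weights
        self.v = np.zeros(n_out)                            # membrane potentials
        self.decay = np.exp(-1.0 / tau)                     # per-timestep leak factor
        self.v_th = v_th                                    # firing threshold

    def step(self, spikes_in):
        # Leak, integrate weighted input spikes, fire, then reset fired units.
        self.v = self.v * self.decay + self.w @ spikes_in
        spikes_out = (self.v >= self.v_th).astype(float)
        self.v[spikes_out > 0] = 0.0
        return spikes_out

layer1 = LIFLayer(n_in=64, n_out=32)
layer2 = LIFLayer(n_in=32, n_out=16)

total_spikes = 0.0
T = 100
for _ in range(T):
    # Stand-in for an event-camera frame: sparse binary events.
    events = (rng.random(64) < 0.2).astype(float)
    s1 = layer1.step(events)
    s2 = layer2.step(s1)
    total_spikes += s1.sum() + s2.sum()

# One plausible intrinsic reward in the efficient-coding spirit:
# negative mean spike count per timestep, so actions that let the
# encoder represent the input with fewer spikes score higher.
reward = -total_spikes / T
print(reward)
```

The key design point this sketch illustrates is that the reward requires no external supervision: it is derived entirely from the encoder's own activity, which is what allows representation learning and motor learning to proceed jointly.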