
Training Spiking Neural Networks for Reinforcement Learning Tasks With Temporal Coding Method.

Author Information

Wu Guanlin, Liang Dongchen, Luan Shaotong, Wang Ji

Affiliations

Academy of Military Science, Beijing, China.

Nanhu Laboratory, Jiaxing, China.

Publication Information

Front Neurosci. 2022 Aug 17;16:877701. doi: 10.3389/fnins.2022.877701. eCollection 2022.

Abstract

Recent years have witnessed an increasing demand for using spiking neural networks (SNNs) to implement artificial intelligence systems, and for combining SNNs with reinforcement learning (RL) architectures under an effective training method. Recently, a temporal coding method has been proposed that trains SNNs while preserving the asynchronous nature of spiking neurons. We propose a training method that enables temporal coding in RL tasks. To tackle the high sparsity of spikes, we introduce a self-incremental variable that pushes every spiking neuron to fire, which makes SNNs fully differentiable. In addition, an encoding method is proposed to solve the problem of information loss in temporal-coded inputs. The experimental results show that SNNs trained by the proposed method achieve performance comparable to state-of-the-art artificial neural networks on benchmark reinforcement learning tasks.
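The core idea in the abstract can be illustrated with a minimal sketch of time-to-first-spike (temporal) coding. Here, each output neuron integrates weighted input spikes, and a self-incremental term growing linearly with time guarantees that every neuron eventually crosses threshold and fires, so that every neuron has a well-defined (and hence differentiable) firing time. All names, constants, and the specific form of the self-incremental term below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def first_spike_times(input_times, weights, threshold=1.0,
                      alpha=0.2, t_max=10.0, dt=0.01):
    """Hypothetical sketch of temporal (time-to-first-spike) coding.

    input_times : (n_in,) arrival time of each input spike
    weights     : (n_in, n_out) synaptic weights
    alpha       : slope of the self-incremental term alpha * t, which
                  pushes every neuron to fire before t_max (assumption:
                  alpha * t_max >= threshold)
    Returns the first-spike time of each output neuron.
    """
    n_out = weights.shape[1]
    spike_times = np.full(n_out, t_max)          # t_max = "not yet fired"
    for step in range(int(t_max / dt) + 1):
        t = step * dt
        arrived = (input_times <= t).astype(float)   # inputs that have spiked
        v = arrived @ weights + alpha * t            # potential + self-increment
        fired = (v >= threshold) & (spike_times == t_max)
        spike_times[fired] = t                       # record first crossing only
    return spike_times
```

Note how the `alpha * t` term plays the role the abstract assigns to the self-incremental variable: even a neuron receiving no input spikes fires by `threshold / alpha`, so the sparsity of spikes never leaves a firing time undefined.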


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bde0/9428400/b6ee3f94ee44/fnins-16-877701-g0001.jpg
