Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.

Authors

Kugele Alexander, Pfeil Thomas, Pfeiffer Michael, Chicca Elisabetta

Affiliations

Faculty of Technology and Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany.

Bosch Center for Artificial Intelligence, Renningen, Germany.

Publication Information

Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.

Abstract

Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, the following subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques for sequence processing tasks challenging: whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
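
To make the delay-matching idea concrete, the following is a minimal Python/NumPy sketch of a streaming rollout with a spatio-temporal shortcut connection. It is not the authors' implementation: the layer sizes, weights, shortcut topology, and linear readout are illustrative assumptions, and the actual ANN training under these delays and the subsequent conversion to spiking neurons are omitted.

```python
# Minimal, illustrative sketch (not the paper's code) of a streaming rollout:
# every connection, including feed-forward ones, carries a one-time-step delay,
# so each layer updates only from the previous step's activations of its
# presynaptic layers and all layers could run in parallel.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: input -> layer1 -> layer2 -> output, plus a shortcut layer1 -> output.
W1 = rng.standard_normal((32, 16)) * 0.1   # input   -> layer 1
W2 = rng.standard_normal((16, 16)) * 0.1   # layer 1 -> layer 2
Wo = rng.standard_normal((16, 10)) * 0.1   # layer 2 -> output (deep path)
Ws = rng.standard_normal((16, 10)) * 0.1   # layer 1 -> output (shortcut path)

def relu(x):
    return np.maximum(x, 0.0)

def streaming_rollout(frames):
    """frames: sequence of input vectors, e.g. time-binned event-camera frames."""
    h1 = np.zeros(16)   # layer-1 activation from the previous time step
    h2 = np.zeros(16)   # layer-2 activation from the previous time step
    outputs = []
    for x in frames:
        # The output merges two paths of different delay: the shortcut gives an
        # early, approximate response; the deeper path refines it in later steps.
        out = h1 @ Ws + h2 @ Wo
        # Each layer reads only previous-step activations (the current frame x
        # is the presynaptic activity arriving at layer 1 in this step).
        h1, h2 = relu(x @ W1), relu(h1 @ W2)
        outputs.append(out)
    return outputs

# Example: five time steps of random 32-dimensional frames.
predictions = streaming_rollout(rng.standard_normal((5, 32)))
```

Because every connection in this rollout already carries a non-zero delay, the trained weights can in principle be transferred to an SNN whose synapses have the same propagation delays, which is the correspondence the paper exploits.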

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83e1/7214871/15d671f6415b/fnins-14-00439-g0001.jpg
