Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

Authors

Rueckauer Bodo, Lungu Iulia-Alexandra, Hu Yuhuang, Pfeiffer Michael, Liu Shih-Chii

Affiliations

Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.

Bosch Center for Artificial Intelligence, Renningen, Germany.

Publication

Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.

Abstract

Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not, however, include certain common operations such as max-pooling, softmax, batch normalization, and Inception modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. Using LeNet for MNIST and BinaryNet for CIFAR-10 as examples, we show that, at the cost of an error-rate increase of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
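The correspondence that CNN-to-SNN conversion exploits can be illustrated with a minimal sketch (an illustrative toy, not the authors' toolbox code): a non-leaky integrate-and-fire neuron that resets by subtracting its threshold after each spike fires, for a constant input in [0, threshold], at a rate proportional to that input — the same shape as the ReLU activation of the analog network.

```python
def relu(x):
    """Standard rectified linear unit."""
    return max(0.0, x)

def if_neuron_rate(input_current, threshold=1.0, n_steps=1000):
    """Firing rate of a non-leaky integrate-and-fire neuron driven by a
    constant input current, using reset-by-subtraction after each spike."""
    v = 0.0
    spikes = 0
    for _ in range(n_steps):
        v += input_current
        if v >= threshold:
            v -= threshold   # subtract (rather than zero) the membrane potential
            spikes += 1      # so residual charge is kept and the rate tracks the input
    return spikes / n_steps

# For inputs in [0, threshold], the rate approximates the ReLU activation;
# negative inputs never reach threshold and yield a rate of zero.
for x in (-0.5, 0.0, 0.3, 0.7):
    print(f"input {x:+.1f}: rate {if_neuron_rate(x):.3f} vs ReLU {relu(x):.3f}")
```

Reset-by-subtraction (rather than reset-to-zero) is the variant that keeps the rate approximation tight over long simulation runs, since no accumulated charge is discarded at spike time.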
