
BitBrain and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures.

Authors

Hopkins Michael, Fil Jakub, Jones Edward George, Furber Steve

Affiliations

Advanced Processor Technologies Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom.

Publication

Front Neuroinform. 2023 Mar 21;17:1125844. doi: 10.3389/fninf.2023.1125844. eCollection 2023.

Abstract

We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory that enables fast and adaptive learning and accurate, robust inference. The mechanism is designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures. An example implementation on the SpiNNaker neuromorphic platform has been developed and initial results are presented. The SBC memory stores coincidences between features detected in class examples in a training set, and infers the class of a previously unseen test example by identifying the class with which it shares the highest number of feature coincidences. A number of SBC memories may be combined in a BitBrain to increase the diversity of the contributing feature coincidences. The resulting inference mechanism is shown to have excellent classification performance on benchmarks such as MNIST and EMNIST, achieving classification accuracy with single-pass learning approaching that of state-of-the-art deep networks with much larger tuneable parameter spaces and much higher training costs. It can also be made very robust to noise. BitBrain is designed to be very efficient in training and inference on both conventional and neuromorphic architectures. It provides a unique combination of single-pass, single-shot and continuous supervised learning, following a very simple unsupervised phase. Accurate classification inference that is very robust against imperfect inputs has been demonstrated. These contributions make it uniquely well-suited for edge and IoT applications.
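To make the mechanism described in the abstract more concrete, the sketch below illustrates one plausible reading of an SBC memory in Python: a binary table marks which feature coincidences (here, co-active feature pairs) have been observed for each class during a single training pass, and inference scores a test example by counting how many of its coincidences are marked for each class, summed across several memories built on different feature subsets. The pairwise-coincidence encoding, the binary input features, the names SBCMemorySketch and predict, and the random-subset ensemble are illustrative assumptions rather than the paper's exact construction.

```python
# Minimal sketch of the SBC idea, under the assumptions stated above.
# All names and the pairwise-coincidence encoding are hypothetical
# illustrations, not the paper's implementation.
import numpy as np


class SBCMemorySketch:
    def __init__(self, n_features, n_classes, subset=None):
        # Optional feature subset so that several memories can contribute
        # diverse coincidences when combined (loosely, a "BitBrain" ensemble).
        self.subset = np.arange(n_features) if subset is None else np.asarray(subset)
        k = len(self.subset)
        # One bit per (feature_i, feature_j, class) coincidence.
        self.table = np.zeros((k, k, n_classes), dtype=bool)

    def train(self, x, label):
        """Single-pass learning: mark every co-active feature pair for this class."""
        active = np.flatnonzero(x[self.subset])
        self.table[np.ix_(active, active, [label])] = True

    def scores(self, x):
        """Count, per class, how many of the example's coincidences are stored."""
        active = np.flatnonzero(x[self.subset])
        return self.table[np.ix_(active, active)].sum(axis=(0, 1))


def predict(memories, x):
    """Infer the class with the highest total coincidence count across memories."""
    total = sum(m.scores(x) for m in memories)
    return int(np.argmax(total))


if __name__ == "__main__":
    # Toy usage on random binary data (not MNIST/EMNIST): three memories,
    # each built over a different random half of the features.
    rng = np.random.default_rng(0)
    n_features, n_classes = 64, 10
    memories = [
        SBCMemorySketch(n_features, n_classes,
                        subset=rng.choice(n_features, size=32, replace=False))
        for _ in range(3)
    ]
    x_train = rng.random((200, n_features)) < 0.2   # sparse binary features
    y_train = rng.integers(0, n_classes, size=200)
    for x, y in zip(x_train, y_train):
        for m in memories:
            m.train(x, int(y))
    print(predict(memories, x_train[0]))
```

Because learning here is just setting bits, training is a single pass and can continue incrementally, which is consistent with the single-pass, continuous learning and noise robustness claimed in the abstract; the paper's actual encoding and memory layout should be consulted for the real design.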

