
Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems.

Affiliations

Institute of Microelectronics, Tsinghua University, Beijing, 100084, China.

Institute of Microelectronics, Tsinghua University, Beijing, 100084, China; Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, 100084, China.

Publication

Neural Netw. 2018 Dec;108:217-223. doi: 10.1016/j.neunet.2018.08.012. Epub 2018 Sep 1.

Abstract

Currently, powerful deep learning models usually require significant processor and memory resources, which leads to very high energy consumption. The emerging resistive random access memory (RRAM) has shown great potential for constructing scalable and energy-efficient neural networks. However, it is hard to port a high-precision neural network from conventional digital CMOS hardware to analog RRAM systems owing to the variability of RRAM devices. A suitable on-chip learning algorithm is needed to retrain the network and recover its performance. In addition, how to integrate the peripheral digital computation with the analog RRAM crossbar remains a challenge. Here, we propose an on-chip learning algorithm, named sign backpropagation (SBP), for RRAM-based multilayer perceptrons (MLPs), which uses binary (0, 1) interfaces in the forward pass and 2-bit (±1, 0) signals in the backward pass. Simulation results show that the proposed method and architecture achieve classification accuracy on the MNIST dataset comparable to a standard MLP, while saving area and energy in the calculation and storage of intermediate results and exploiting the potential of the RRAM crossbar for neuromorphic computing.
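The abstract's core idea — binary (0, 1) activations at the forward interface and 2-bit (±1, 0) error signals at the backward interface — can be illustrated with a minimal NumPy sketch. This is an assumption-laden reconstruction, not the authors' implementation: the network sizes, thresholds, learning rate, and the sign-based weight update (a fixed-magnitude conductance change per crossbar cell) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    # Forward interface: binary activations in {0, 1}
    return (x > 0).astype(np.float64)

def sign2bit(x, thresh=1e-3):
    # Backward interface: 2-bit error signal in {-1, 0, +1}
    # (small errors are zeroed by the threshold)
    return np.sign(x) * (np.abs(x) > thresh)

# Hypothetical tiny MLP: 8 inputs -> 16 hidden -> 4 outputs.
# Each weight matrix stands in for an RRAM crossbar array.
W1 = rng.normal(0.0, 0.5, (8, 16))
W2 = rng.normal(0.0, 0.5, (16, 4))
lr = 0.05  # illustrative fixed conductance step

def step(x, target):
    """One SBP-style training step on a single example (sketch)."""
    global W1, W2
    # Forward: analog crossbar multiply-accumulate, then binary activation
    h = binarize(x @ W1)
    y = h @ W2                       # output kept analog for the loss
    # Backward: propagate only quantized error signs, gated by active units
    e2 = sign2bit(y - target)        # output error in {-1, 0, +1}
    e1 = sign2bit((e2 @ W2.T) * h)   # hidden error, same 2-bit quantization
    # Sign-based update: each cell moves by a fixed step up or down
    W2 -= lr * np.sign(np.outer(h, e2))
    W1 -= lr * np.sign(np.outer(x, e1))
    return y
```

Because only binary activations and 2-bit error signs cross the digital/analog boundary, the periphery never needs to compute or store high-precision intermediate results, which is where the claimed area and energy savings come from.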

