Suppr 超能文献


Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems.

Affiliations

Institute of Microelectronics, Tsinghua University, Beijing, 10084, China.

Institute of Microelectronics, Tsinghua University, Beijing, 10084, China; Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, 10084, China.

Publication

Neural Netw. 2018 Dec;108:217-223. doi: 10.1016/j.neunet.2018.08.012. Epub 2018 Sep 1.

DOI: 10.1016/j.neunet.2018.08.012
PMID: 30216871
Abstract

Currently, powerful deep learning models usually require significant processor and memory resources, which leads to very high energy consumption. The emerging resistive random access memory (RRAM) has shown great potential for constructing scalable and energy-efficient neural networks. However, it is hard to port a high-precision neural network from conventional digital CMOS hardware to analog RRAM systems owing to the variability of RRAM devices, so a suitable on-chip learning algorithm is needed to retrain the network or improve its performance. In addition, how to integrate the peripheral digital computations with the analog RRAM crossbar remains a challenge. Here, we propose an on-chip learning algorithm, named sign backpropagation (SBP), for RRAM-based multilayer perceptrons (MLPs), with binary interfaces (0, 1) in the forward pass and 2-bit signals (±1, 0) in the backward pass. Simulation results show that the proposed method and architecture achieve classification accuracy on the MNIST dataset comparable to that of an MLP, while saving area and energy in the computation and storage of intermediate results and exploiting the potential of the RRAM crossbar for neuromorphic computing.
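The quantization scheme the abstract describes, binary (0, 1) activations in the forward pass and 2-bit (±1, 0) error signals in the backward pass, can be sketched in NumPy. This is a minimal illustrative sketch, not the paper's implementation: the layer sizes, threshold, learning rate, input, and training loop below are all assumptions made for demonstration.

```python
# Illustrative sketch of a sign-backpropagation (SBP) style update rule:
# forward activations binarized to {0, 1}, backward errors quantized to
# {-1, 0, +1}. All hyperparameters here are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x, threshold=0.0):
    """Forward interface: binary activations (0, 1)."""
    return (x > threshold).astype(np.float64)

def sign2bit(e, eps=1e-3):
    """Backward interface: 2-bit error signal (-1, 0, +1)."""
    s = np.sign(e)
    s[np.abs(e) < eps] = 0.0
    return s

# Tiny 2-layer MLP (a 784-hidden-10 network would match MNIST; small
# sizes keep the sketch readable).
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))
lr = 0.01

def train_step(x, target):
    global W1, W2
    # Forward: analog crossbar multiply, binary activation between layers.
    h = binarize(x @ W1)
    y = h @ W2                          # output kept analog for the error
    # Backward: propagate only the quantized sign of the error.
    e_out = sign2bit(y - target)        # values in {-1, 0, +1}
    e_hid = sign2bit(e_out @ W2.T) * h  # gate by the binary activation
    # Updates use only signed additions, which suits crossbar hardware.
    W2 -= lr * np.outer(h, e_out)
    W1 -= lr * np.outer(x, e_hid)

# Toy demonstration on a single fixed input/target pair.
x = np.ones(n_in)
target = np.array([0.0, 1.0, 0.0, 0.0])
err0 = np.abs(binarize(x @ W1) @ W2 - target).mean()
for _ in range(50):
    train_step(x, target)
err1 = np.abs(binarize(x @ W1) @ W2 - target).mean()
```

Despite carrying only sign information backward, the updates still move the output toward the target on this toy example (`err1` falls below `err0`), which is the intuition behind trading gradient precision for area and energy savings.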


Similar articles

1. Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems.
Neural Netw. 2018 Dec;108:217-223. doi: 10.1016/j.neunet.2018.08.012. Epub 2018 Sep 1.
2. Unsupervised Learning on Resistive Memory Array Based Spiking Neural Networks.
Front Neurosci. 2019 Aug 6;13:812. doi: 10.3389/fnins.2019.00812. eCollection 2019.
3. Modeling of Self-Aligned Selector Based on Ultra-Thin Metal Oxide for Resistive Random-Access Memory (RRAM) Crossbar Arrays.
Nanomaterials (Basel). 2024 Apr 12;14(8):668. doi: 10.3390/nano14080668.
4. Resistive random access memory: introduction to device mechanism, materials and application to neuromorphic computing.
Discov Nano. 2023 Mar 9;18(1):36. doi: 10.1186/s11671-023-03775-y.
5. RRAM-based synapse devices for neuromorphic systems.
Faraday Discuss. 2019 Feb 18;213(0):421-451. doi: 10.1039/c8fd00127h.
6. Weighted Synapses Without Carry Operations for RRAM-Based Neuromorphic Systems.
Front Neurosci. 2018 Mar 16;12:167. doi: 10.3389/fnins.2018.00167. eCollection 2018.
7. Analogue pattern recognition with stochastic switching binary CMOS-integrated memristive devices.
Sci Rep. 2020 Sep 2;10(1):14450. doi: 10.1038/s41598-020-71334-x.
8. Resistive Switching Devices for Neuromorphic Computing: From Foundations to Chip Level Innovations.
Nanomaterials (Basel). 2024 Mar 15;14(6):527. doi: 10.3390/nano14060527.
9. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
10. FangTianSim: High-Level Cycle-Accurate Resistive Random-Access Memory-Based Multi-Core Spiking Neural Network Processor Simulator.
Front Neurosci. 2022 Jan 20;15:806325. doi: 10.3389/fnins.2021.806325. eCollection 2021.

Cited by

1. Enhancing in-situ updates of quantized memristor neural networks: a Siamese network learning approach.
Cogn Neurodyn. 2024 Aug;18(4):2047-2059. doi: 10.1007/s11571-024-10069-1. Epub 2024 Feb 13.
2. Hardware implementation of memristor-based artificial neural networks.
Nat Commun. 2024 Mar 4;15(1):1974. doi: 10.1038/s41467-024-45670-9.
3. Toward memristive in-memory computing: principles and applications.
Front Optoelectron. 2022 May 12;15(1):23. doi: 10.1007/s12200-022-00025-4.
4. Memristor-based analogue computing for brain-inspired sound localization with in situ training.
Nat Commun. 2022 Apr 19;13(1):2026. doi: 10.1038/s41467-022-29712-8.
5. On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.
Front Neurosci. 2020 Jul 7;14:423. doi: 10.3389/fnins.2020.00423. eCollection 2020.