Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip.

Publication Information

IEEE Trans Cybern. 2021 Apr;51(4):1875-1887. doi: 10.1109/TCYB.2019.2912205. Epub 2021 Mar 17.

DOI: 10.1109/TCYB.2019.2912205
PMID: 31059463
Abstract

Existing deep neural networks (DNNs) are computationally expensive and memory intensive, which hinders their further deployment in novel nanoscale devices and applications with lower memory resources or strict latency requirements. In this paper, a novel approach to accelerate on-chip learning systems using memristive quantized neural networks (M-QNNs) is presented. A real problem of multilevel memristive synaptic weights due to device-to-device (D2D) and cycle-to-cycle (C2C) variations is considered. Different levels of Gaussian noise are added to the memristive model during each adjustment. Another method of using memristors with binary states to build M-QNNs is presented, which suffers from fewer D2D and C2C variations compared with using multilevel memristors. Furthermore, methods of solving the sneak path issues in the memristive crossbar arrays are proposed. The M-QNN approach is evaluated on two image classification datasets, that is, ten-digit number images and handwritten images from the Modified National Institute of Standards and Technology (MNIST) dataset. In addition, input images with different levels of zero-mean Gaussian noise are tested to verify the robustness of the proposed method. Another highlight of the proposed method is that it can significantly reduce computational time and memory during the process of image recognition.
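The variation-modeling idea in the abstract — binary-state synaptic weights perturbed by zero-mean Gaussian noise standing in for D2D and C2C device variation, read out through a crossbar dot product — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the noise magnitudes, function names, and the 4×3 crossbar size are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    """Quantize ideal weights to two conductance states {-1, +1},
    mimicking binary-state memristor synapses."""
    return np.where(w >= 0, 1.0, -1.0)

def program_with_variation(w_q, d2d_sigma=0.05, c2c_sigma=0.02):
    """Model device-to-device (D2D) and cycle-to-cycle (C2C) variation
    as additive zero-mean Gaussian noise on the programmed weights."""
    d2d = rng.normal(0.0, d2d_sigma, size=w_q.shape)  # fixed per device
    c2c = rng.normal(0.0, c2c_sigma, size=w_q.shape)  # varies per write cycle
    return w_q + d2d + c2c

# Ideal trained weights for one small crossbar layer (4 inputs -> 3 outputs)
w_ideal = rng.normal(0.0, 1.0, size=(4, 3))
w_prog = program_with_variation(binarize(w_ideal))

x = rng.normal(0.0, 1.0, size=4)  # input voltages applied to the rows
y = x @ w_prog                    # the crossbar computes this dot product in analog
print(y.shape)                    # (3,)
```

Because the binary scheme only needs to distinguish two conductance states, the Gaussian perturbation rarely flips a weight's sign, which is one intuition for why the paper reports fewer D2D/C2C problems with binary memristors than with multilevel ones.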


Similar Articles

1. Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip.
IEEE Trans Cybern. 2021 Apr;51(4):1875-1887. doi: 10.1109/TCYB.2019.2912205. Epub 2021 Mar 17.
2. Automatic Learning Rate Adaption for Memristive Deep Learning Systems.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10791-10802. doi: 10.1109/TNNLS.2023.3244006. Epub 2024 Aug 5.
3. Intrinsic variation effect in memristive neural network with weight quantization.
Nanotechnology. 2022 Jun 24;33(37). doi: 10.1088/1361-6528/ac7651.
4. Dynamical memristive neural networks and associative self-learning architectures using biomimetic devices.
Front Neurosci. 2023 Apr 20;17:1153183. doi: 10.3389/fnins.2023.1153183. eCollection 2023.
5. Mixed-Precision Deep Learning Based on Computational Memory.
Front Neurosci. 2020 May 12;14:406. doi: 10.3389/fnins.2020.00406. eCollection 2020.
6. Pattern Classification Using Quantized Neural Networks for FPGA-Based Low-Power IoT Devices.
Sensors (Basel). 2022 Nov 10;22(22):8694. doi: 10.3390/s22228694.
7. Adapted MLP-Mixer network based on crossbar arrays of fast and multilevel switching (Co-Fe-B)(LiNbO) nanocomposite memristors.
Nanoscale Horiz. 2024 Jan 29;9(2):238-247. doi: 10.1039/d3nh00421j.
8. Non-linear Memristive Synaptic Dynamics for Efficient Unsupervised Learning in Spiking Neural Networks.
Front Neurosci. 2021 Feb 1;15:580909. doi: 10.3389/fnins.2021.580909. eCollection 2021.
9. Quantized Magnetic Domain Wall Synapse for Efficient Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4996-5005. doi: 10.1109/TNNLS.2024.3369969. Epub 2025 Feb 28.
10. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.

Cited By

1. Layer ensemble averaging for fault tolerance in memristive neural networks.
Nat Commun. 2025 Feb 1;16(1):1250. doi: 10.1038/s41467-025-56319-6.
2. Research on the Impact of Data Density on Memristor Crossbar Architectures in Neuromorphic Pattern Recognition.
Micromachines (Basel). 2023 Oct 27;14(11):1990. doi: 10.3390/mi14111990.
3. Thermally stable threshold selector based on CuAg alloy for energy-efficient memory and neuromorphic computing applications.
Nat Commun. 2023 Jun 6;14(1):3285. doi: 10.1038/s41467-023-39033-z.
4. Noise and Memristance Variation Tolerance of Single Crossbar Architectures for Neuromorphic Image Recognition.
Micromachines (Basel). 2021 Jun 13;12(6):690. doi: 10.3390/mi12060690.