Quantized Magnetic Domain Wall Synapse for Efficient Deep Neural Networks.

Authors

Dhull Seema, Misba Walid Al, Nisar Arshid, Atulasimha Jayasimha, Kaushik Brajesh Kumar

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4996-5005. doi: 10.1109/TNNLS.2024.3369969. Epub 2025 Feb 28.

DOI: 10.1109/TNNLS.2024.3369969
PMID: 38470601
Abstract

The quantization of synaptic weights using emerging nonvolatile memory (NVM) devices is a promising route to implementing computationally efficient neural networks on resource-constrained hardware. However, the practical implementation of such synaptic weights is hampered by imperfect memory characteristics, specifically the limited number of available quantized states and the large intrinsic device variation and stochasticity involved in writing the synaptic states. This article presents on-chip training and inference of a neural network using a quantized magnetic domain wall (DW)-based synaptic array and CMOS peripheral circuits. A rigorous model of the magnetic DW device, accounting for stochasticity and process variations, is used for the synapse. To achieve stable quantized weights, DW pinning is realized by means of physical constrictions. Finally, a VGG8 architecture for CIFAR-10 image classification is simulated using the extracted synaptic device characteristics. Performance in terms of accuracy, energy, latency, and area consumption is evaluated while considering the process variations and nonidealities in the DW device as well as the peripheral circuits. The proposed quantized neural network (QNN) architecture achieves efficient on-chip learning with 92.4% training and 90.4% inference accuracy, respectively. In comparison to a pure CMOS-based design, it demonstrates an overall improvement in area, energy, and latency by , , and , respectively.
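The abstract describes mapping synaptic weights onto a limited set of quantized device states while contending with stochastic write behavior. A minimal sketch of that idea is shown below; the number of states, the uniform level spacing, and the Gaussian write-noise model are illustrative assumptions for exposition, not device parameters from the paper.

```python
import numpy as np

def quantize_with_write_noise(weights, n_states=8, w_min=-1.0, w_max=1.0,
                              sigma=0.02, rng=None):
    """Snap continuous weights to the nearest of n_states quantized levels,
    then perturb each written value with Gaussian noise to mimic a
    stochastic device write (illustrative model only)."""
    rng = np.random.default_rng() if rng is None else rng
    levels = np.linspace(w_min, w_max, n_states)           # available synaptic states
    idx = np.abs(weights[..., None] - levels).argmin(-1)   # nearest-level index
    written = levels[idx] + rng.normal(0.0, sigma, weights.shape)
    return np.clip(written, w_min, w_max)                  # device range is bounded

# Example: quantize a small random weight matrix to 8 states.
w = np.random.default_rng(0).uniform(-1, 1, (4, 4))
wq = quantize_with_write_noise(w, n_states=8, sigma=0.02,
                               rng=np.random.default_rng(1))
```

With `sigma=0` the function reduces to ideal nearest-level quantization; increasing `sigma` models the write stochasticity that, per the abstract, degrades naive implementations and motivates stable pinned states.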


Similar Articles

1. Quantized Magnetic Domain Wall Synapse for Efficient Deep Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4996-5005. doi: 10.1109/TNNLS.2024.3369969. Epub 2025 Feb 28.
2. Pattern Classification Using Quantized Neural Networks for FPGA-Based Low-Power IoT Devices.
   Sensors (Basel). 2022 Nov 10;22(22):8694. doi: 10.3390/s22228694.
3. Neural Network Training Acceleration With RRAM-Based Hybrid Synapses.
   Front Neurosci. 2021 Jun 24;15:690418. doi: 10.3389/fnins.2021.690418. eCollection 2021.
4. Investigation of Deep Spiking Neural Networks Utilizing Gated Schottky Diode as Synaptic Devices.
   Micromachines (Basel). 2022 Oct 22;13(11):1800. doi: 10.3390/mi13111800.
5. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
   Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.
6. Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip.
   IEEE Trans Cybern. 2021 Apr;51(4):1875-1887. doi: 10.1109/TCYB.2019.2912205. Epub 2021 Mar 17.
7. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
   Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
8. Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.
   Neural Netw. 2021 Apr;136:28-39. doi: 10.1016/j.neunet.2020.12.022. Epub 2020 Dec 29.
9. Enhancing in-situ updates of quantized memristor neural networks: a Siamese network learning approach.
   Cogn Neurodyn. 2024 Aug;18(4):2047-2059. doi: 10.1007/s11571-024-10069-1. Epub 2024 Feb 13.
10. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    IEEE Trans Neural Netw Learn Syst. 2021 Jul;32(7):2925-2938. doi: 10.1109/TNNLS.2020.3008996. Epub 2021 Jul 6.

Cited By

1. All-Electrical Control of Spin Synapses for Neuromorphic Computing: Bridging Multi-State Memory with Quantization for Efficient Neural Networks.
   Adv Sci (Weinh). 2025 Jun;12(22):e2417735. doi: 10.1002/advs.202417735. Epub 2025 Apr 26.