

Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator.

Authors

Noh Kyungmi, Kwak Hyunjeong, Son Jeonghoon, Kim Seungkun, Um Minseong, Kang Minil, Kim Doyoon, Ji Wonjae, Lee Junyong, Jo HwiJeong, Woo Jiyong, Lee Hyung-Min, Kim Seyoung

Affiliations

Department of Materials Science and Engineering, Pohang University of Science and Technology, Pohang 37673, Republic of Korea.

School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea.

Publication

Sci Adv. 2024 Jun 14;10(24):eadl3350. doi: 10.1126/sciadv.adl3350.

DOI: 10.1126/sciadv.adl3350
PMID: 38875324
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11177898/
Abstract

We present the fabrication of 4 K-scale electrochemical random-access memory (ECRAM) cross-point arrays for an analog neural network training accelerator, and the electrical characteristics of an 8 × 8 ECRAM array with 100% yield, showing excellent switching characteristics and low cycle-to-cycle and device-to-device variations. Leveraging the advances of the ECRAM array, we showcase its efficacy in neural network training using the Tiki-Taka version 2 algorithm (TTv2) tailored for non-ideal analog memory devices. Through an experimental study using ECRAM devices, we investigate the influence of retention characteristics on the training performance of TTv2, revealing that the relative location of the retention convergence point critically determines the available weight range and, consequently, affects the training accuracy. We propose a retention-aware zero-shifting technique designed to optimize neural network training performance, particularly in scenarios involving cross-point devices with limited retention times. This technique ensures robust and efficient analog neural network training despite the practical constraints posed by analog cross-point devices.
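As a rough illustration of the zero-shifting idea summarized in the abstract (a hypothetical toy model, not the paper's implementation): if an analog conductance relaxes toward a device-specific retention convergence point, then taking that convergence point as the zero reference keeps a programmed zero weight stable under retention decay. The function name `relax` and the constants `w_conv`, `tau`, and `dt` below are illustrative assumptions.

```python
import math

# Toy model of retention-aware zero-shifting. An analog conductance g
# relaxes exponentially toward a device-specific retention convergence
# point w_conv. Zero-shifting takes g_ref = w_conv as the zero reference,
# so the effective weight w_eff = g - g_ref decays toward 0 instead of
# drifting toward w_conv, preserving the full signed weight range for
# TTv2-style training.

def relax(g, w_conv, tau, dt):
    """Exponential retention decay of conductance g toward w_conv."""
    return w_conv + (g - w_conv) * math.exp(-dt / tau)

w_conv = 0.6          # assumed retention convergence point
tau, dt = 10.0, 5.0   # assumed retention time constant and idle interval

# Without zero-shifting: a weight programmed to 0 drifts toward w_conv.
w_unshifted = relax(0.0, w_conv, tau, dt)

# With zero-shifting: "zero" is programmed at g_ref = w_conv, so the
# effective weight (conductance minus reference) stays at 0 after decay.
g_ref = w_conv
w_shifted = relax(g_ref, w_conv, tau, dt) - g_ref

print(f"effective zero without shift: {w_unshifted:.3f}")  # drifts to ~0.236
print(f"effective zero with shift:    {w_shifted:.3f}")    # stays at 0.000
```

In this toy model, the relative location of `w_conv` inside the device's conductance window plays the role the abstract describes: the farther the convergence point sits from the effective zero, the more of the weight range retention decay consumes.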


Figures (f1–f6):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3bd/11177898/4829959d9db0/sciadv.adl3350-f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3bd/11177898/27674c1cf9e3/sciadv.adl3350-f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3bd/11177898/032352f91e26/sciadv.adl3350-f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3bd/11177898/ef1c13d9488f/sciadv.adl3350-f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3bd/11177898/e8ba1ae0828b/sciadv.adl3350-f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c3bd/11177898/480cce2f020d/sciadv.adl3350-f6.jpg

Similar Articles

1. Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator.
   Sci Adv. 2024 Jun 14;10(24):eadl3350. doi: 10.1126/sciadv.adl3350.
2. Enabling Training of Neural Networks on Noisy Hardware.
   Front Artif Intell. 2021 Sep 9;4:699148. doi: 10.3389/frai.2021.699148. eCollection 2021.
3. Impact of Asymmetric Weight Update on Neural Network Training With Tiki-Taka Algorithm.
   Front Neurosci. 2022 Jan 6;15:767953. doi: 10.3389/fnins.2021.767953. eCollection 2021.
4. Analog Resistive Switching Devices for Training Deep Neural Networks with the Novel Tiki-Taka Algorithm.
   Nano Lett. 2024 Jan 24;24(3):866-872. doi: 10.1021/acs.nanolett.3c03697. Epub 2024 Jan 11.
5. Algorithm for Training Neural Networks on Resistive Device Arrays.
   Front Neurosci. 2020 Feb 26;14:103. doi: 10.3389/fnins.2020.00103. eCollection 2020.
6. Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory.
   Front Neurosci. 2021 Apr 8;15:636127. doi: 10.3389/fnins.2021.636127. eCollection 2021.
7. On-Chip Integrated Atomically Thin 2D Material Heater as a Training Accelerator for an Electrochemical Random-Access Memory Synapse for Neuromorphic Computing Application.
   ACS Nano. 2022 Aug 23;16(8):12214-12225. doi: 10.1021/acsnano.2c02913. Epub 2022 Jul 19.
8. Electrochemical random-access memory: recent advances in materials, devices, and systems towards neuromorphic computing.
   Nano Converg. 2024 Feb 28;11(1):9. doi: 10.1186/s40580-024-00415-8.
9. Device-Algorithm Co-Optimization for an On-Chip Trainable Capacitor-Based Synaptic Device with IGZO TFT and Retention-Centric Tiki-Taka Algorithm.
   Adv Sci (Weinh). 2023 Oct;10(29):e2303018. doi: 10.1002/advs.202303018. Epub 2023 Aug 9.
10. Open-loop analog programmable electrochemical memory array.
   Nat Commun. 2023 Oct 4;14(1):6184. doi: 10.1038/s41467-023-41958-4.

Cited By

1. Unconventional Multimodal Switching in Single-Crystalline Nanowire Channel ECRAM.
   Small. 2025 Jul;21(30):e2504071. doi: 10.1002/smll.202504071. Epub 2025 Jun 23.

References

1. Device-Algorithm Co-Optimization for an On-Chip Trainable Capacitor-Based Synaptic Device with IGZO TFT and Retention-Centric Tiki-Taka Algorithm.
   Adv Sci (Weinh). 2023 Oct;10(29):e2303018. doi: 10.1002/advs.202303018. Epub 2023 Aug 9.
2. Neural Network Training With Asymmetric Crosspoint Elements.
   Front Artif Intell. 2022 May 9;5:891624. doi: 10.3389/frai.2022.891624. eCollection 2022.
3. Impact of Asymmetric Weight Update on Neural Network Training With Tiki-Taka Algorithm.
   Front Neurosci. 2022 Jan 6;15:767953. doi: 10.3389/fnins.2021.767953. eCollection 2021.
4. A crossbar array of magnetoresistive memory devices for in-memory computing.
   Nature. 2022 Jan;601(7892):211-216. doi: 10.1038/s41586-021-04196-6. Epub 2022 Jan 12.
5. A fully hardware-based memristive multilayer neural network.
   Sci Adv. 2021 Nov 26;7(48):eabj4801. doi: 10.1126/sciadv.abj4801. Epub 2021 Nov 24.
6. Enabling Training of Neural Networks on Noisy Hardware.
   Front Artif Intell. 2021 Sep 9;4:699148. doi: 10.3389/frai.2021.699148. eCollection 2021.
7. Filament-Free Bulk Resistive Memory Enables Deterministic Analogue Switching.
   Adv Mater. 2020 Nov;32(45):e2003984. doi: 10.1002/adma.202003984. Epub 2020 Sep 22.
8. Memory devices and applications for in-memory computing.
   Nat Nanotechnol. 2020 Jul;15(7):529-544. doi: 10.1038/s41565-020-0655-z. Epub 2020 Mar 30.
9. Algorithm for Training Neural Networks on Resistive Device Arrays.
   Front Neurosci. 2020 Feb 26;14:103. doi: 10.3389/fnins.2020.00103. eCollection 2020.
10. Fully hardware-implemented memristor convolutional neural network.
   Nature. 2020 Jan;577(7792):641-646. doi: 10.1038/s41586-020-1942-4. Epub 2020 Jan 29.