


Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.

Authors

Dutta Sourav, Schafer Clemens, Gomez Jorge, Ni Kai, Joshi Siddharth, Datta Suman

Affiliations

Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States.

Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States.

Publication

Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.

DOI: 10.3389/fnins.2020.00634
PMID: 32670012
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7327100/
Abstract

The two possible pathways toward artificial intelligence (AI), (i) neuroscience-oriented neuromorphic computing [like spiking neural network (SNN)] and (ii) computer science driven machine learning (like deep learning), differ widely in their fundamental formalism and coding schemes (Pei et al., 2019). Deviating from the traditional deep learning approach of relying on neuronal models with static nonlinearities, SNNs attempt to capture brain-like features like computation using spikes. This holds the promise of improving the energy efficiency of the computing platforms. In order to achieve a much higher areal and energy efficiency compared to today's hardware implementation of SNN, we need to go beyond the traditional route of relying on CMOS-based digital or mixed-signal neuronal circuits and segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) are being explored as a promising alternative for building neuromorphic hardware by utilizing their non-volatile nature and rich polarization switching dynamics. In this work, we propose an all FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust the synaptic weights, we implement a surrogate gradient (SG) learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset. As such, we provide a pathway toward building energy-efficient neuromorphic hardware that can support traditional machine learning algorithms. Finally, we undertake synergistic device-algorithm co-design by accounting for the impacts of device-level variation (stochasticity) and limited bit precision of on-chip synaptic weights (available analog states) on the classification accuracy.
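As a rough illustration of the surrogate gradient (SG) idea the abstract describes, here is a minimal sketch in Python: the spike nonlinearity is a hard threshold in the forward pass, while the backward pass substitutes a smooth "fast sigmoid" derivative (the form used by SuperSpike, listed in the references). The neuron constants, the target rate, and the toy weight update are illustrative assumptions, not the paper's FeFET implementation.

```python
# Minimal surrogate-gradient sketch for one leaky integrate-and-fire (LIF)
# neuron. All constants (tau, v_th, beta, lr, target rate) are illustrative.

def lif_step(v, i_in, tau=20.0, v_th=1.0):
    """One Euler step of a LIF membrane; returns (new_v, spike)."""
    v = v + (-v + i_in) / tau
    spike = 1.0 if v >= v_th else 0.0
    if spike:
        v = 0.0  # reset the membrane after firing
    return v, spike

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Fast-sigmoid surrogate for the derivative of the hard threshold:
    the true derivative is zero almost everywhere, so training substitutes
    this smooth, peaked function of the distance to threshold."""
    return 1.0 / (1.0 + beta * abs(v - v_th)) ** 2

# Toy update: drive the neuron through one weight and nudge that weight
# toward a (hypothetical) target spike probability per time step.
w, v, lr = 0.5, 0.0, 0.1
for t in range(100):
    v, s = lif_step(v, w * 1.0)      # constant unit input scaled by w
    err = s - 0.05                   # illustrative target rate of 0.05
    w -= lr * err * surrogate_grad(v)  # gradient flows through the surrogate
```

The design point being illustrated: because the spike function itself is non-differentiable, the surrogate replaces its derivative only in the backward pass, which is what lets standard cost-function optimization (as in deep learning) be applied to spiking hardware.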


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b9ab/7327100/0791a39edb2d/fnins-14-00634-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b9ab/7327100/de2a97e2391b/fnins-14-00634-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b9ab/7327100/68bb6d5a9281/fnins-14-00634-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b9ab/7327100/706f7ad926d0/fnins-14-00634-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b9ab/7327100/70628201514d/fnins-14-00634-g005.jpg

Similar Articles

1
Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
2
Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization.
Neural Netw. 2018 Jul;103:118-127. doi: 10.1016/j.neunet.2018.03.019. Epub 2018 Apr 6.
3
Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.
4
Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence.
Neural Netw. 2020 Jan;121:366-386. doi: 10.1016/j.neunet.2019.09.024. Epub 2019 Sep 26.
5
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
6
Dual-Ferroelectric-Coupling-Engineered Two-Dimensional Transistors for Multifunctional In-Memory Computing.
ACS Nano. 2022 Feb 22;16(2):3362-3372. doi: 10.1021/acsnano.2c00079. Epub 2022 Feb 11.
7
Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
8
A Low-Power Spiking Neural Network Chip Based on a Compact LIF Neuron and Binary Exponential Charge Injector Synapse Circuits.
Sensors (Basel). 2021 Jun 29;21(13):4462. doi: 10.3390/s21134462.
9
Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks.
Neural Comput. 2022 May 19;34(6):1289-1328. doi: 10.1162/neco_a_01499.
10
A Swarm Optimization Solver Based on Ferroelectric Spiking Neural Networks.
Front Neurosci. 2019 Aug 13;13:855. doi: 10.3389/fnins.2019.00855. eCollection 2019.

Cited By

1
Neuromorphic Hebbian learning with magnetic tunnel junction synapses.
Commun Eng. 2025 Aug 4;4(1):142. doi: 10.1038/s44172-025-00479-2.
2
Reconfigurable neuromorphic functions in antiferroelectric transistors through coupled polarization switching and charge trapping dynamics.
Nat Commun. 2025 May 11;16(1):4368. doi: 10.1038/s41467-025-59603-7.
3
Single-transistor organic electrochemical neurons.
Nat Commun. 2025 May 9;16(1):4334. doi: 10.1038/s41467-025-59587-4.
4
An Energy Efficient Memory Cell for Quantum and Neuromorphic Computing at Low Temperatures.
Nano Lett. 2025 Apr 23;25(16):6374-6381. doi: 10.1021/acs.nanolett.4c05855. Epub 2025 Apr 13.
5
Evaluation of fluxon synapse device based on superconducting loops for energy efficient neuromorphic computing.
Front Neurosci. 2025 Feb 14;19:1511371. doi: 10.3389/fnins.2025.1511371. eCollection 2025.
6
Cryo-SIMPLY: A Reliable STT-MRAM-Based Smart Material Implication Architecture for In-Memory Computing.
Nanomaterials (Basel). 2024 Dec 25;15(1):9. doi: 10.3390/nano15010009.
7
Taming Prolonged Ionic Drift-Diffusion Dynamics for Brain-Inspired Computation.
Adv Mater. 2025 Jan;37(3):e2407326. doi: 10.1002/adma.202407326. Epub 2024 Nov 27.
8
All-Ferroelectric Spiking Neural Networks via Morphotropic Phase Boundary Neurons.
Adv Sci (Weinh). 2024 Nov;11(44):e2407870. doi: 10.1002/advs.202407870. Epub 2024 Oct 9.
9
Compact artificial neuron based on anti-ferroelectric transistor.
Nat Commun. 2022 Nov 17;13(1):7018. doi: 10.1038/s41467-022-34774-9.
10
Neural sampling machine with stochastic synapse allows brain-like learning and inference.
Nat Commun. 2022 May 11;13(1):2571. doi: 10.1038/s41467-022-30305-8.

References

1
Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence.
Neural Netw. 2020 Jan;121:366-386. doi: 10.1016/j.neunet.2019.09.024. Epub 2019 Sep 26.
2
Towards artificial general intelligence with hybrid Tianjic chip architecture.
Nature. 2019 Aug;572(7767):106-111. doi: 10.1038/s41586-019-1424-8. Epub 2019 Jul 31.
3
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
4
Mimicking biological neurons with a nanoscale ferroelectric transistor.
Nanoscale. 2018 Nov 29;10(46):21755-21763. doi: 10.1039/c8nr07135g.
5
On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights.
Front Neurosci. 2018 Oct 15;12:665. doi: 10.3389/fnins.2018.00665. eCollection 2018.
6
Accumulative Polarization Reversal in Nanoscale Ferroelectric Transistors.
ACS Appl Mater Interfaces. 2018 Jul 18;10(28):23997-24002. doi: 10.1021/acsami.8b08967. Epub 2018 Jul 6.
7
Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
8
Equivalent-accuracy accelerated neural-network training using analogue memory.
Nature. 2018 Jun;558(7708):60-67. doi: 10.1038/s41586-018-0180-5. Epub 2018 Jun 6.
9
An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.
Front Neurosci. 2018 Apr 10;12:213. doi: 10.3389/fnins.2018.00213. eCollection 2018.
10
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Neural Comput. 2018 Jun;30(6):1514-1541. doi: 10.1162/neco_a_01086. Epub 2018 Apr 13.