
Spike-based time-domain analog weighted-sum calculation model for extremely low power VLSI implementation of multi-layer neural networks.

Author Information

Wang Quan, Tamukoh Hakaru, Morie Takashi

Affiliations

Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Kitakyushu, Japan.

Research Center for Neuromorphic AI Hardware, Kyushu Institute of Technology, Kitakyushu, Japan.

Publication Information

Front Neurosci. 2025 Sep 12;19:1656892. doi: 10.3389/fnins.2025.1656892. eCollection 2025.

Abstract

In deep neural network (DNN) models, the weighted summation, or multiply-and-accumulate (MAC) operation, is an essential and computationally heavy task that leads to high power consumption in current digital processors. Analog operation in complementary metal-oxide-semiconductor (CMOS) very-large-scale integration (VLSI) circuits is a promising approach to performing such calculations with extremely low power consumption. In this paper, a time-domain analog weighted-sum calculation model is proposed based on an integrate-and-fire-type spiking neuron model. The proposed model is applied to multi-layer feedforward networks, in which weighted summations with positive and negative weights are performed separately, and each layer produces two spike timings proportional to the positive and negative sums, respectively. These timings are fed into the next layer without an explicit subtraction operation. We also propose VLSI circuits to implement the proposed model. Unlike conventional analog voltage- or current-mode circuits, the time-domain analog circuits exploit the transient charging/discharging behavior of capacitors. Because the circuits can be designed without operational amplifiers, they can operate with extremely low power consumption. We designed a proof-of-concept (PoC) CMOS circuit to verify the weighted-sum operation with identical weights. Simulation results showed a precision better than 4 bits and an energy efficiency of 237.7 tera operations per second per watt (TOPS/W) for the weighted-sum calculation, more than one order of magnitude higher than that of state-of-the-art digital AI processors. Our model promises to be a suitable approach for performing intensive in-memory computing (IMC) for DNNs with moderate precision and high energy efficiency while reducing analog-to-digital converter (ADC) overhead.
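The two-timing scheme described in the abstract (values carried as spike timings within a time window, with positive- and negative-weight sums producing two separate output timings that later layers consume without subtraction) can be illustrated with a small numerical sketch. This is a toy model, not the paper's circuit equations: the linear timing code t = T·(1 − x) and the normalization by the per-sign weight totals are assumptions made for illustration only.

```python
def time_domain_weighted_sum(t_in, w, t_window=1.0):
    """Toy sketch of a time-domain weighted sum (illustrative assumptions).

    Each input value x is assumed encoded as a spike timing
    t = t_window * (1 - x), so an earlier spike means a larger value.
    A weight acts like a charging slope that starts at the input's spike
    timing, so the charge accumulated by the end of the window is
    proportional to sum_i w[i] * (t_window - t_in[i]).
    Positive and negative weights are accumulated separately, and each
    sum is mapped back to a spike timing (larger sum -> earlier spike),
    mirroring the paper's two-timing idea without any subtraction.
    """
    # Separate accumulated "charge" for positive and negative weights.
    sp = sum(wi * (t_window - ti) for ti, wi in zip(t_in, w) if wi > 0)
    sn = sum(-wi * (t_window - ti) for ti, wi in zip(t_in, w) if wi < 0)
    wp_sum = sum(wi for wi in w if wi > 0)
    wn_sum = sum(-wi for wi in w if wi < 0)
    # Normalize each sum by its own weight total (an assumed choice) and
    # map it back into a timing inside [0, t_window]; if one sign is
    # absent, its timing defaults to the end of the window (no spike).
    t_pos = t_window - sp / wp_sum if wp_sum > 0 else t_window
    t_neg = t_window - sn / wn_sum if wn_sum > 0 else t_window
    return t_pos, t_neg

# Example: inputs at timings 0.2 and 0.6 with weights +1 and -1.
# The positive timing reproduces the early input, the negative the late one.
t_pos, t_neg = time_domain_weighted_sum([0.2, 0.6], [1.0, -1.0])
print(t_pos, t_neg)  # 0.2 0.6
```

With uniform positive weights the output timing reduces to the mean input timing, which is the intuition behind the PoC circuit's identical-weight test: the output spike time is a direct analog read-out of the weighted average.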

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e47/12463932/d41466d284b7/fnins-19-1656892-g0001.jpg

