

Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic.

Affiliations

Department of Information Engineering, Università di Pisa, Via Girolamo Caruso, 16, 56122 Pisa PI, Italy.

Medical Microinstruments (MMI) S.p.A., Via Sterpulino, 3, 56121 Pisa PI, Italy.

Publication

Sensors (Basel). 2020 Mar 10;20(5):1515. doi: 10.3390/s20051515.

DOI: 10.3390/s20051515
PMID: 32164152
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7085555/
Abstract

With increasing real-time constraints being put on the use of Deep Neural Networks (DNNs) by real-time scenarios, there is the need to review information representation. A very challenging path is to employ an encoding that allows a fast processing and hardware-friendly representation of information. Among the proposed alternatives to the IEEE 754 standard regarding floating point representation of real numbers, the recently introduced Posit format has been theoretically proven to be really promising in satisfying the mentioned requirements. However, with the absence of proper hardware support for this novel type, this evaluation can be conducted only through a software emulation. While waiting for the widespread availability of the Posit Processing Units (the equivalent of the Floating Point Unit (FPU)), we can already exploit the Posit representation and the currently available Arithmetic-Logic Unit (ALU) to speed up DNNs by manipulating the low-level bit string representations of Posits. As a first step, in this paper, we present new arithmetic properties of the Posit number system with a focus on the configuration with 0 exponent bits. In particular, we propose a new class of Posit operators called L1 operators, which consists of fast and approximated versions of existing arithmetic operations or functions (e.g., hyperbolic tangent (TANH) and extended linear unit (ELU)) only using integer arithmetic. These operators introduce very interesting properties and results: (i) faster evaluation than the exact counterpart with a negligible accuracy degradation; (ii) an efficient ALU emulation of a number of Posits operations; and (iii) the possibility to vectorize operations in Posits, using existing ALU vectorized operations (such as the scalable vector extension of ARM CPUs or advanced vector extensions on Intel CPUs). 
As a second step, we test the proposed activation function on Posit-based DNNs, showing how 16-bit down to 10-bit Posits represent an exact replacement for 32-bit floats while 8-bit Posits could be an interesting alternative to 32-bit floats since their performances are a bit lower but their high speed and low storage properties are very appealing (leading to a lower bandwidth demand and more cache-friendly code). Finally, we point out how small Posits (i.e., up to 14 bits long) are very interesting while PPUs become widespread, since Posit operations can be tabulated in a very efficient way (see details in the text).
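To make the "L1 operator" idea above concrete: with zero exponent bits (es = 0), a posit's value is determined only by its sign, regime, and fraction, and certain activation functions become simple integer manipulations of the bit pattern. The sketch below (function names and structure are my own, not taken from the paper) decodes a posit<n,0> and shows the well-known fast-sigmoid trick attributed to Gustafson: flip the sign bit and shift right by two, using only ALU operations.

```python
def decode_posit(bits: int, n: int) -> float:
    """Decode an n-bit posit with es = 0 (zero exponent bits) into a float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # NaR (Not a Real), left unhandled here
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0:
        bits = (-bits) & mask  # negative posits are stored in two's complement
    # Regime: run of identical bits following the sign bit.
    first = (bits >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((bits >> i) & 1) == first:
        run += 1
        i -= 1  # i ends on the regime terminator bit
    regime = (run - 1) if first else -run
    # Fraction: the bits after the terminator (es = 0 means no exponent
    # field, so the fraction starts immediately).
    frac = (bits & ((1 << i) - 1)) / (1 << i) if i > 0 else 0.0
    return sign * (2.0 ** regime) * (1.0 + frac)


def fast_sigmoid_bits(bits: int, n: int) -> int:
    """Approximate sigmoid directly on a posit<n,0> bit pattern:
    flip the sign bit, then shift right by 2 -- integer arithmetic only."""
    return (bits ^ (1 << (n - 1))) >> 2
```

For posit<8,0>, the encoding of 1.0 is 0b01000000; `fast_sigmoid_bits` maps it to 0b00110000, which decodes to 0.75, close to the true sigmoid(1) ≈ 0.731, and maps the encoding of 0.0 to exactly 0.5. Because the whole operation is a XOR and a shift, it vectorizes on plain integer SIMD lanes, which is the property the abstract highlights.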


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/15c2cb276c3b/sensors-20-01515-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/00a84854879d/sensors-20-01515-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/7982613d57f7/sensors-20-01515-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/d1eabf1f0bf8/sensors-20-01515-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/aa453db4ef75/sensors-20-01515-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/155dc4b76b73/sensors-20-01515-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/b6702f6564e6/sensors-20-01515-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/1b243965ab30/sensors-20-01515-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3355/7085555/8453737b0778/sensors-20-01515-g009.jpg

Similar Articles

1
Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic.
Sensors (Basel). 2020 Mar 10;20(5):1515. doi: 10.3390/s20051515.
2
Expressive power of ReLU and step networks under floating-point operations.
Neural Netw. 2024 Jul;175:106297. doi: 10.1016/j.neunet.2024.106297. Epub 2024 Apr 9.
3
Number Formats, Error Mitigation, and Scope for 16-Bit Arithmetics in Weather and Climate Modeling Analyzed With a Shallow Water Model.
J Adv Model Earth Syst. 2020 Oct;12(10):e2020MS002246. doi: 10.1029/2020MS002246. Epub 2020 Oct 14.
4
Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training.
Sensors (Basel). 2024 Mar 27;24(7):2145. doi: 10.3390/s24072145.
5
Training high-performance and large-scale deep neural networks with full 8-bit integers.
Neural Netw. 2020 May;125:70-82. doi: 10.1016/j.neunet.2019.12.027. Epub 2020 Jan 15.
6
Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
IEEE Trans Nanobioscience. 2017 Sep;16(6):383-399. doi: 10.1109/TNB.2017.2726682. Epub 2017 Jul 13.
7
Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets.
Neural Netw. 2000 Jan;13(1):91-110. doi: 10.1016/s0893-6080(99)00102-1.
8
Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors.
Sensors (Basel). 2022 Feb 6;22(3):1230. doi: 10.3390/s22031230.
9
L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2019 Jul;30(7):2043-2051. doi: 10.1109/TNNLS.2018.2876179. Epub 2018 Nov 9.
10
High-Performance Acceleration of 2-D and 3-D CNNs on FPGAs Using Static Block Floating Point.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4473-4487. doi: 10.1109/TNNLS.2021.3116302. Epub 2023 Aug 4.

Cited By

1
Recent Trends on Applications of Electronics Pervading the Industry, Environment and Society.
Sensors (Basel). 2020 Dec 18;20(24):7295. doi: 10.3390/s20247295.