

CHARLES: A C++ fixed-point library for Photonic-Aware Neural Networks.

Affiliations

Scuola Superiore Sant'Anna, Pisa, 56124, Italy; National Research Council of Italy - Institute of Electronics, Information Engineering and Telecommunications (CNR-IEIIT), Pisa, 56122, Italy; Sma-RTy Italia Srl, Carugate, 20061, Italy.

Scuola Superiore Sant'Anna, Pisa, 56124, Italy.

Publication

Neural Netw. 2023 May;162:531-540. doi: 10.1016/j.neunet.2023.03.007. Epub 2023 Mar 21.

DOI: 10.1016/j.neunet.2023.03.007
PMID: 36990002
Abstract

In this paper we present CHARLES (C++ pHotonic Aware neuRaL nEtworkS), a C++ library aimed at providing a flexible tool to simulate the behavior of Photonic-Aware Neural Networks (PANNs). PANNs are neural network architectures aware of the constraints imposed by the underlying photonic hardware, chiefly its low equivalent computational precision. For this reason, CHARLES uses fixed-point computations for inference, while supporting both floating-point and fixed-point numerical formats for training. In this way, we can compare the effects of quantization in the inference phase when the training phase is performed on a classical floating-point model versus a model using high-precision fixed-point numbers. To validate CHARLES and identify the numerical format best suited to PANN training, we report simulation results on three datasets: Iris, MNIST, and Fashion-MNIST. Fixed-point training is shown to outperform floating-point training when inference is executed at bitwidths suitable for photonic implementation. Indeed, performing the training phase in the floating-point domain and then quantizing to lower bitwidths results in a very high accuracy loss. Instead, when fixed-point numbers are used in the training phase, the accuracy loss due to quantization to lower bitwidths is significantly reduced. In particular, we show that for the Iris dataset, fixed-point training achieves performance similar to floating-point training. Fixed-point training obtains accuracies of 90.4% and 68.1% on the MNIST and Fashion-MNIST datasets using only 6 bits, while floating-point training reaches accuracies of just 25.4% and 50.0% at the same bitwidths.


Similar Articles

1. CHARLES: A C++ fixed-point library for Photonic-Aware Neural Networks.
Neural Netw. 2023 May;162:531-540. doi: 10.1016/j.neunet.2023.03.007. Epub 2023 Mar 21.
2. Quantization-aware training for low precision photonic neural networks.
Neural Netw. 2022 Nov;155:561-573. doi: 10.1016/j.neunet.2022.09.015. Epub 2022 Sep 19.
3. Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors.
Sensors (Basel). 2022 Feb 6;22(3):1230. doi: 10.3390/s22031230.
4. Channel response-aware photonic neural network accelerators for high-speed inference through bandwidth-limited optics.
Opt Express. 2022 Mar 28;30(7):10664-10671. doi: 10.1364/OE.452803.
5. Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI.
Sensors (Basel). 2023 May 11;23(10):4667. doi: 10.3390/s23104667.
6. Mixed-precision weights network for field-programmable gate array.
PLoS One. 2021 May 10;16(5):e0251329. doi: 10.1371/journal.pone.0251329. eCollection 2021.
7. Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training.
Sensors (Basel). 2024 Mar 27;24(7):2145. doi: 10.3390/s24072145.
8. A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms.
Comput Intell Neurosci. 2022 May 11;2022:9485933. doi: 10.1155/2022/9485933. eCollection 2022.
9. Expressive power of ReLU and step networks under floating-point operations.
Neural Netw. 2024 Jul;175:106297. doi: 10.1016/j.neunet.2024.106297. Epub 2024 Apr 9.
10. Design of Hardware Accelerators for Optimized and Quantized Neural Networks to Detect Atrial Fibrillation in Patch ECG Device with RISC-V.
Sensors (Basel). 2023 Mar 1;23(5):2703. doi: 10.3390/s23052703.