Suppr 超能文献




Understanding and mitigating noise in trained deep neural networks.

Affiliations

Département d'Optique P. M. Duffieux, Institut FEMTO-ST, Université Bourgogne-Franche-Comté CNRS UMR 6174, Besançon, France; Institute of Physics, Saratov State University, 83 Astrakhanskaya str., 410012 Saratov, Russia.

Département d'Optique P. M. Duffieux, Institut FEMTO-ST, Université Bourgogne-Franche-Comté CNRS UMR 6174, Besançon, France.

Publication Information

Neural Netw. 2022 Feb;146:151-160. doi: 10.1016/j.neunet.2021.11.008. Epub 2021 Nov 13.

DOI: 10.1016/j.neunet.2021.11.008
PMID: 34864223
Abstract

Deep neural networks unlocked a vast range of new applications by solving tasks of which many were previously deemed as reserved to higher human intelligence. One of the developments enabling this success was a boost in computing power provided by special purpose hardware, such as graphic or tensor processing units. However, these do not leverage fundamental features of neural networks like parallelism and analog state variables. Instead, they emulate neural networks relying on binary computing, which results in unsustainable energy consumption and comparatively low speed. Fully parallel and analogue hardware promises to overcome these challenges, yet the impact of analogue neuron noise and its propagation, i.e. accumulation, threatens rendering such approaches inept. Here, we determine for the first time the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers. We study additive and multiplicative as well as correlated and uncorrelated noise, and develop analytical methods that predict the noise level in any layer of symmetric deep neural networks or deep neural networks trained with back propagation. We find that noise accumulation is generally bound, and adding additional network layers does not worsen the signal to noise ratio beyond a limit. Most importantly, noise accumulation can be suppressed entirely when neuron activation functions have a slope smaller than unity. We therefore developed the framework for noise in fully connected deep neural networks implemented in analog systems, and identify criteria allowing engineers to design noise-resilient novel neural network hardware.

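The abstract's central finding — that noise accumulation stays bounded, and is suppressed entirely once the activation slope drops below unity — can be illustrated numerically. The sketch below is not the paper's analytical framework; it is a simplified toy model (an assumption of this example) with linear activations of fixed slope, random orthogonal weights so the propagated error is not reshaped by the weights themselves, and uncorrelated additive Gaussian noise injected at every layer. The error then obeys Var(e_{l+1}) = slope² · Var(e_l) + σ², which grows linearly with depth at slope 1 but saturates at σ²/(1 − slope²) for slope < 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulated_noise_var(depth, slope, noise_std, n=128):
    """Track the per-neuron variance of the accumulated noise e_l through
    `depth` fully connected layers with additive Gaussian neuron noise:
        e_{l+1} = slope * W_l @ e_l + xi_l,   xi_l ~ N(0, noise_std^2 I).
    Random orthogonal W_l preserves ||e||, isolating the slope's effect."""
    e = np.zeros(n)
    variances = []
    for _ in range(depth):
        W, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
        e = slope * (W @ e) + noise_std * rng.standard_normal(n)
        variances.append(e @ e / n)  # empirical per-neuron noise variance
    return variances

sigma = 0.1
v_unit = accumulated_noise_var(depth=50, slope=1.0, noise_std=sigma)
v_half = accumulated_noise_var(depth=50, slope=0.5, noise_std=sigma)

# Unit slope: noise variance keeps growing (~depth * sigma^2).
# Slope 0.5: variance saturates near the geometric-series bound.
print(f"slope 1.0: layer 10 var {v_unit[9]:.4f}, layer 50 var {v_unit[49]:.4f}")
print(f"slope 0.5: layer 10 var {v_half[9]:.4f}, layer 50 var {v_half[49]:.4f}")
print(f"analytic bound for slope 0.5: {sigma**2 / (1 - 0.25):.4f}")
```

Running this shows the slope-1.0 variance growing roughly linearly with depth while the slope-0.5 variance flattens out near σ²/(1 − 0.25), mirroring the paper's qualitative conclusion that sub-unity activation slopes make deep analog networks noise-resilient.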

Similar Articles

1. Understanding and mitigating noise in trained deep neural networks.
   Neural Netw. 2022 Feb;146:151-160. doi: 10.1016/j.neunet.2021.11.008. Epub 2021 Nov 13.
2. Fundamental aspects of noise in analog-hardware neural networks.
   Chaos. 2019 Oct;29(10):103128. doi: 10.1063/1.5120824.
3. Noise-mitigation strategies in physical feedforward neural networks.
   Chaos. 2022 Jun;32(6):061106. doi: 10.1063/5.0096637.
4. Impact of white noise in artificial neural networks trained for classification: Performance and noise mitigation strategies.
   Chaos. 2024 May 1;34(5). doi: 10.1063/5.0206807.
5. Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence.
   Neural Netw. 2020 Jan;121:366-386. doi: 10.1016/j.neunet.2019.09.024. Epub 2019 Sep 26.
6. Neuromorphic computing hardware and neural architectures for robotics.
   Sci Robot. 2022 Jun 29;7(67):eabl8419. doi: 10.1126/scirobotics.abl8419.
7. On energy complexity of fully-connected layers.
   Neural Netw. 2024 Oct;178:106419. doi: 10.1016/j.neunet.2024.106419. Epub 2024 May 31.
8. Enabling Training of Neural Networks on Noisy Hardware.
   Front Artif Intell. 2021 Sep 9;4:699148. doi: 10.3389/frai.2021.699148. eCollection 2021.
9. Robust and energy-efficient expression recognition based on improved deep ResNets.
   Biomed Tech (Berl). 2019 Sep 25;64(5):519-528. doi: 10.1515/bmt-2018-0027.
10. A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems.
    Sensors (Basel). 2022 Oct 14;22(20):7807. doi: 10.3390/s22207807.

Cited By

1. Quantum-limited stochastic optical neural networks operating at a few quanta per activation.
   Nat Commun. 2025 Jan 3;16(1):359. doi: 10.1038/s41467-024-55220-y.
2. Robust neural networks using stochastic resonance neurons.
   Commun Eng. 2024 Nov 13;3(1):169. doi: 10.1038/s44172-024-00314-0.
3. Quantum-noise-limited optical neural networks operating at a few quanta per activation.
   Res Sq. 2023 Oct 26:rs.3.rs-3318262. doi: 10.21203/rs.3.rs-3318262/v1.
4. All-analog photoelectronic chip for high-speed vision tasks.
   Nature. 2023 Nov;623(7985):48-57. doi: 10.1038/s41586-023-06558-8. Epub 2023 Oct 25.
5. Noise-resilient and high-speed deep learning with coherent silicon photonics.
   Nat Commun. 2022 Sep 23;13(1):5572. doi: 10.1038/s41467-022-33259-z.
6. An optical neural network using less than 1 photon per multiplication.
   Nat Commun. 2022 Jan 10;13(1):123. doi: 10.1038/s41467-021-27774-8.