

Direct Error-Driven Learning for Deep Neural Networks With Applications to Big Data.

Publication information

IEEE Trans Neural Netw Learn Syst. 2020 May;31(5):1763-1770. doi: 10.1109/TNNLS.2019.2920964. Epub 2019 Jul 15.

DOI: 10.1109/TNNLS.2019.2920964
PMID: 31329564
Abstract

In this brief, heterogeneity and noise in big data are shown to increase the generalization error for a traditional learning regime utilized for deep neural networks (deep NNs). To reduce this error, while overcoming the issue of vanishing gradients, a direct error-driven learning (EDL) scheme is proposed. First, to reduce the impact of heterogeneity and data noise, the concept of a neighborhood is introduced. Using this neighborhood, an approximation of generalization error is obtained and an overall error, comprised of learning and the approximate generalization errors, is defined. A novel NN weight-tuning law is obtained through a layer-wise performance measure enabling the direct use of overall error for learning. Additional constraints are introduced into the layer-wise performance measure to guide and improve the learning process in the presence of noisy dimensions. The proposed direct EDL scheme effectively addresses the issue of heterogeneity and noise while mitigating vanishing gradients and noisy dimensions. A comprehensive simulation study is presented where the proposed approach is shown to mitigate the vanishing gradient problem while improving generalization by 6%.
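The abstract's core mechanism — routing the overall error directly to every layer through a layer-wise performance measure, with a neighborhood term standing in for the approximate generalization error — can be sketched in a few lines. This is a hedged illustration, not the authors' exact scheme: the fixed random projections `B` are an assumption made here to stand in for the paper's layer-wise measure (in the style of direct feedback alignment), and the "neighborhood" is simplified to the error on noise-perturbed copies of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class DirectEDLNet:
    """Illustrative network trained with direct error-driven updates:
    the overall output error reaches each hidden layer through a fixed
    random projection rather than a backpropagated chain of derivatives,
    so the learning signal is not attenuated by depth (no vanishing
    gradient)."""

    def __init__(self, sizes, lr=0.05):
        self.lr = lr
        self.W = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        # Fixed projections carrying the output error to each hidden layer.
        self.B = [rng.normal(0.0, 0.5, (sizes[-1], n)) for n in sizes[1:-1]]

    def forward(self, x):
        acts = [x]
        for W in self.W[:-1]:
            acts.append(relu(acts[-1] @ W))
        acts.append(acts[-1] @ self.W[-1])   # linear output layer
        return acts

    def train_step(self, x, y, x_nbr=None):
        acts = self.forward(x)
        e = acts[-1] - y                     # learning error
        if x_nbr is not None:
            # Neighborhood term: error on nearby (noisy) inputs stands in
            # for the paper's approximate generalization error.
            e = e + (self.forward(x_nbr)[-1] - y)
        n = len(x)
        for i, B in enumerate(self.B):
            # Direct update: overall error projected to layer i, gated by
            # that layer's ReLU activity mask.
            delta = (e @ B) * (acts[i + 1] > 0)
            self.W[i] -= self.lr * acts[i].T @ delta / n
        self.W[-1] -= self.lr * acts[-2].T @ e / n   # output layer
        return float(np.mean(e ** 2))

# Toy regression: the overall (learning + neighborhood) error decreases.
net = DirectEDLNet([2, 16, 16, 1])
X = rng.normal(size=(256, 2))
y = X[:, :1] * X[:, 1:]
losses = [net.train_step(X, y, x_nbr=X + 0.05 * rng.normal(size=X.shape))
          for _ in range(300)]
```

Because each hidden layer consumes the output error directly, no derivative chain has to survive the full depth; the fixed `B` matrices and the noisy-neighbor term are illustrative assumptions, not the paper's construction.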


Similar articles

1
Direct Error-Driven Learning for Deep Neural Networks With Applications to Big Data.
IEEE Trans Neural Netw Learn Syst. 2020 May;31(5):1763-1770. doi: 10.1109/TNNLS.2019.2920964. Epub 2019 Jul 15.
2
Cooperative Deep Q-Learning Framework for Environments Providing Image Feedback.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9267-9276. doi: 10.1109/TNNLS.2022.3232069. Epub 2024 Jul 8.
3
Analysis of Diffractive Optical Neural Networks and Their Integration with Electronic Neural Networks.
IEEE J Sel Top Quantum Electron. 2020 Jan-Feb;26(1). doi: 10.1109/JSTQE.2019.2921376. Epub 2019 Jun 6.
4
Neural networks for advanced control of robot manipulators.
IEEE Trans Neural Netw. 2002;13(2):343-54. doi: 10.1109/72.991420.
5
High-dimensional dynamics of generalization error in neural networks.
Neural Netw. 2020 Dec;132:428-446. doi: 10.1016/j.neunet.2020.08.022. Epub 2020 Sep 5.
6
Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness.
Neural Netw. 2020 Oct;130:85-99. doi: 10.1016/j.neunet.2020.06.024. Epub 2020 Jul 3.
7
Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16683-16695. doi: 10.1109/TNNLS.2023.3297113. Epub 2024 Oct 29.
8
Residual DNN: training diffractive deep neural networks via learnable light shortcuts.
Opt Lett. 2020 May 15;45(10):2688-2691. doi: 10.1364/OL.389696.
9
Enabling deeper learning on big data for materials informatics applications.
Sci Rep. 2021 Feb 19;11(1):4244. doi: 10.1038/s41598-021-83193-1.
10
An analysis of training and generalization errors in shallow and deep networks.
Neural Netw. 2020 Jan;121:229-241. doi: 10.1016/j.neunet.2019.08.028. Epub 2019 Sep 7.

Cited by

1
Predictive Maintenance for Pump Systems and Thermal Power Plants: State-of-the-Art Review, Trends and Challenges.
Sensors (Basel). 2020 Apr 24;20(8):2425. doi: 10.3390/s20082425.