Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification.

Affiliations

Albstadt-Sigmaringen University, Albstadt 72458, Germany

Publication Information

Neural Comput. 2021 Jul 26;33(8):2193-2225. doi: 10.1162/neco_a_01407.

DOI: 10.1162/neco_a_01407
PMID: 34310673
Abstract

Supervised learning corresponds to minimizing a loss or cost function expressing the differences between model predictions y_n and the target values t_n given by the training data. In neural networks, this means backpropagating error signals through the transposed weight matrixes from the output layer toward the input layer. For this, error signals in the output layer are typically initialized by the difference y_n - t_n, which is optimal for several commonly used loss functions like cross-entropy or sum of squared errors. Here I evaluate a more general error initialization method using power functions |y_n - t_n|^q for q > 0, corresponding to a new family of loss functions that generalize cross-entropy. Surprisingly, experiments on various learning tasks reveal that a proper choice of q can significantly improve the speed and convergence of backpropagation learning, in particular in deep and recurrent neural networks. The results suggest two main reasons for the observed improvements. First, compared to cross-entropy, the new loss functions provide better fits to the distribution of error signals in the output layer and therefore maximize the model's likelihood more efficiently. Second, the new error initialization procedure may often provide a better gradient-to-loss ratio over a broad range of neural output activity, thereby avoiding flat loss landscapes with vanishing gradients.
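
Illustration (not taken from the paper): a minimal NumPy sketch of how the output-layer error signal could be initialized with the power-function rule described above, assuming softmax outputs and one-hot targets. The function names, the softmax helper, and the choice to keep the sign of (y_n - t_n) are assumptions made here for illustration; q = 1 recovers the standard rule.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def output_error_standard(y, t):
    """Standard output-layer error signal, y_n - t_n: optimal for
    cross-entropy with softmax outputs (and for sum of squared
    errors with linear outputs)."""
    return y - t

def output_error_power(y, t, q=0.5):
    """Power-function initialization sketched from the abstract:
    magnitude |y_n - t_n|**q for q > 0. Keeping the sign of
    (y_n - t_n) is an assumption made here so the signal still
    points in the descent direction."""
    diff = y - t
    return np.sign(diff) * np.abs(diff) ** q

# Toy example: one sample, three classes, one-hot target.
logits = np.array([[2.0, 0.5, -1.0]])
y = softmax(logits)                      # model predictions y_n
t = np.array([[0.0, 1.0, 0.0]])          # target values t_n

print(output_error_standard(y, t))       # equivalent to q = 1
print(output_error_power(y, t, q=0.5))   # q < 1 amplifies errors smaller than 1
print(output_error_power(y, t, q=2.0))   # q > 1 damps errors smaller than 1
```

In a full backpropagation pass, the returned signal would simply replace the standard y_n - t_n at the output layer before being propagated backward through the transposed weight matrixes.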


Similar Articles

1. Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification.
Neural Comput. 2021 Jul 26;33(8):2193-2225. doi: 10.1162/neco_a_01407.

2. Hebbian Descent: A Unified View on Log-Likelihood Learning.
Neural Comput. 2024 Aug 19;36(9):1669-1712. doi: 10.1162/neco_a_01684.

3. Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean.
Neural Netw. 2022 Sep;153:87-103. doi: 10.1016/j.neunet.2022.05.030. Epub 2022 Jun 7.

4. Learning in the machine: Recirculation is random backpropagation.
Neural Netw. 2018 Dec;108:479-494. doi: 10.1016/j.neunet.2018.09.006. Epub 2018 Sep 27.

5. Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation.
Neural Comput. 2020 Dec;32(12):2557-2600. doi: 10.1162/neco_a_01319. Epub 2020 Sep 18.

6. Noise can speed backpropagation learning and deep bidirectional pretraining.
Neural Netw. 2020 Sep;129:359-384. doi: 10.1016/j.neunet.2020.04.004. Epub 2020 Apr 11.

7. Novel maximum-margin training algorithms for supervised neural networks.
IEEE Trans Neural Netw. 2010 Jun;21(6):972-84. doi: 10.1109/TNN.2010.2046423. Epub 2010 Apr 19.

8. A mathematical framework for improved weight initialization of neural networks using Lagrange multipliers.
Neural Netw. 2023 Sep;166:579-594. doi: 10.1016/j.neunet.2023.07.035. Epub 2023 Aug 3.

9. A theory of local learning, the learning channel, and the optimality of backpropagation.
Neural Netw. 2016 Nov;83:51-74. doi: 10.1016/j.neunet.2016.07.006. Epub 2016 Aug 5.

10. Three learning phases for radial-basis-function networks.
Neural Netw. 2001 May;14(4-5):439-58. doi: 10.1016/s0893-6080(01)00027-2.