
Diminishing Batch Normalization

Authors

Ma Yintai, Klabjan Diego

Publication

IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6544-6557. doi: 10.1109/TNNLS.2022.3210840. Epub 2024 May 2.

DOI: 10.1109/TNNLS.2022.3210840
PMID: 36260586
Abstract

In this article, we propose a generalization of the batch normalization (BN) algorithm, diminishing BN (DBN), in which we update the BN parameters in a diminishing moving average way. BN is so effective in accelerating the convergence of neural network training that it has become common practice. Our proposed DBN algorithm retains the overall structure of the original BN algorithm while introducing a weighted averaging update to some trainable parameters. We provide a convergence analysis showing that the DBN algorithm converges to a stationary point with respect to the trainable parameters. Our analysis can be easily generalized to the original BN algorithm by setting some parameters to constant. To the best of our knowledge, this is the first convergence analysis of its kind for BN. We analyze a two-layer model with arbitrary activation functions; common activation functions, such as ReLU and any smooth activation functions, meet our assumptions. In the numerical experiments, we test the proposed algorithm on complex modern CNN models with stochastic gradients (SGs) and ReLU activation on regression, classification, and image reconstruction tasks. We observe that DBN outperforms the original BN algorithm and the layer normalization (LN) benchmark on the MNIST, NI, CIFAR-10, CIFAR-100, and Caltech-UCSD Birds-200-2011 datasets with modern complex CNN models such as ResNet-18 and typical FNN models.
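The abstract describes the key idea, updating BN parameters with a diminishing moving average, but does not spell out the exact update rule. Purely as an illustration, the sketch below is a minimal NumPy batch-normalization layer whose moving-average weight decays over training batches; the class name DiminishingBatchNorm1d and the schedule alpha_t = alpha0 / t are assumptions for the sketch, not taken from the paper, so this is not the authors' exact DBN algorithm. Holding the weight at a constant recovers the familiar constant-momentum BN update, mirroring the abstract's remark that the analysis specializes to the original BN when some parameters are held constant.

```python
import numpy as np

class DiminishingBatchNorm1d:
    """Illustrative sketch only: a BN layer whose running statistics are
    updated with a diminishing moving-average weight alpha_t = alpha0 / t.
    Setting diminishing=False falls back to a constant-momentum update.
    (Hypothetical schedule; the exact DBN rule of Ma & Klabjan is not
    reproduced here.)"""

    def __init__(self, num_features, alpha0=1.0, momentum=0.1,
                 eps=1e-5, diminishing=True):
        self.gamma = np.ones(num_features)       # trainable scale
        self.beta = np.zeros(num_features)       # trainable shift
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.alpha0 = alpha0
        self.momentum = momentum
        self.eps = eps
        self.diminishing = diminishing
        self.t = 0                                # training batches seen

    def forward(self, x, training=True):
        if training:
            self.t += 1
            batch_mean = x.mean(axis=0)
            batch_var = x.var(axis=0)
            # Diminishing weight ~ 1/t; a fixed momentum gives plain BN.
            alpha = self.alpha0 / self.t if self.diminishing else self.momentum
            self.running_mean = (1 - alpha) * self.running_mean + alpha * batch_mean
            self.running_var = (1 - alpha) * self.running_var + alpha * batch_var
            mean, var = batch_mean, batch_var
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta


if __name__ == "__main__":
    # Toy usage: feed a few mini-batches and inspect the running statistics.
    bn = DiminishingBatchNorm1d(num_features=4)
    for _ in range(10):
        batch = 2.0 * np.random.randn(32, 4) + 1.0
        _ = bn.forward(batch, training=True)
    print(bn.running_mean, bn.running_var)
```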


Similar Articles

1. Diminishing Batch Normalization.
   IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6544-6557. doi: 10.1109/TNNLS.2022.3210840. Epub 2024 May 2.
2. Re-Thinking the Effectiveness of Batch Normalization and Beyond.
   IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):465-478. doi: 10.1109/TPAMI.2023.3319005. Epub 2023 Dec 5.
3. Training Faster by Separating Modes of Variation in Batch-Normalized Models.
   IEEE Trans Pattern Anal Mach Intell. 2020 Jun;42(6):1483-1500. doi: 10.1109/TPAMI.2019.2895781. Epub 2019 Jan 28.
4. DBN Structure Design Algorithm for Different Datasets Based on Information Entropy and Reconstruction Error.
   Entropy (Basel). 2018 Dec 4;20(12):927. doi: 10.3390/e20120927.
5. ResNet-Locust-BN Network-Based Automatic Identification of East Asian Migratory Locust Species and Instars from RGB Images.
   Insects. 2020 Jul 22;11(8):458. doi: 10.3390/insects11080458.
6. Revisiting Batch Normalization for Training Low-Latency Deep Spiking Neural Networks From Scratch.
   Front Neurosci. 2021 Dec 9;15:773954. doi: 10.3389/fnins.2021.773954. eCollection 2021.
7. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
   Neural Netw. 2021 Jul;139:17-23. doi: 10.1016/j.neunet.2021.02.011. Epub 2021 Feb 23.
8. Improving Network Training on Resource-Constrained Devices via Habituation Normalization.
   Sensors (Basel). 2022 Dec 16;22(24):9940. doi: 10.3390/s22249940.
9. Why Batch Normalization Damage Federated Learning on Non-IID Data?
   IEEE Trans Neural Netw Learn Syst. 2023 Nov 1;PP. doi: 10.1109/TNNLS.2023.3323302.
10. Dynamic Bayesian network structure learning based on an improved bacterial foraging optimization algorithm.
    Sci Rep. 2024 Apr 9;14(1):8266. doi: 10.1038/s41598-024-58806-0.

Cited By

1. PSFHSP-Net: an efficient lightweight network for identifying pubic symphysis-fetal head standard plane from intrapartum ultrasound images.
   Med Biol Eng Comput. 2024 Oct;62(10):2975-2986. doi: 10.1007/s11517-024-03111-1. Epub 2024 May 9.