Suppr 超能文献



Improving Deep Neural Networks' Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum

Author information

Wang Bao, Ye Qiang

Publication information

IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12288-12300. doi: 10.1109/TNNLS.2023.3255783. Epub 2024 Sep 3.

DOI: 10.1109/TNNLS.2023.3255783
PMID: 37030680
Abstract

Momentum is crucial in stochastic gradient-based optimization algorithms for accelerating or improving the training of deep neural networks (DNNs). In deep learning practice, the momentum is usually weighted by a well-calibrated constant. However, tuning this momentum hyperparameter can be a significant computational burden. In this article, we propose a novel adaptive momentum for improving DNN training; this adaptive momentum, which requires no momentum-related hyperparameter, is motivated by the nonlinear conjugate gradient (NCG) method. Stochastic gradient descent (SGD) with this new adaptive momentum eliminates the need for momentum hyperparameter calibration, allows a significantly larger learning rate, accelerates DNN training, and improves the final accuracy and robustness of the trained DNNs. For instance, SGD with this adaptive momentum reduces the classification error of ResNet110 from 5.25% to 4.64% on CIFAR10 and from 23.75% to 20.03% on CIFAR100. Furthermore, SGD with the new adaptive momentum also benefits adversarial training and hence improves the adversarial robustness of the trained DNNs.
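The abstract's core idea is to replace the fixed, hand-tuned momentum constant with a coefficient recomputed from consecutive gradients at every step, in the style of nonlinear conjugate gradient methods. A minimal sketch of that idea on a toy quadratic, using a Fletcher-Reeves-style ratio of gradient norms as the adaptive weight (an illustrative NCG-flavored choice, not necessarily the paper's exact update rule):

```python
import numpy as np

def sgd_adaptive_momentum(grad_fn, w, lr=0.1, steps=50):
    """Gradient descent whose momentum weight adapts every step.

    Instead of a fixed momentum hyperparameter, the weight beta is
    recomputed from consecutive gradients via a Fletcher-Reeves-style
    ratio, clipped to [0, 1] for stability. This is an illustrative
    sketch of NCG-style adaptive momentum, not the paper's exact rule.
    """
    g_prev = grad_fn(w)
    d = -g_prev                          # initial descent direction
    for _ in range(steps):
        w = w + lr * d
        g = grad_fn(w)
        # Adaptive momentum weight: no tuning, derived from gradients.
        beta = min(1.0, (g @ g) / (g_prev @ g_prev + 1e-12))
        d = -g + beta * d                # momentum-weighted direction
        g_prev = g
    return w

# Toy quadratic f(w) = 0.5 * w @ A @ w, with its minimum at the origin.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
w0 = np.array([2.0, -1.5])
w_final = sgd_adaptive_momentum(lambda w: A @ w, w0)
```

On this toy problem the adaptive weight changes every step as the gradient norms decay, and the iterate is driven toward the minimum without any momentum constant to calibrate.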


Similar articles

1
Improving Deep Neural Networks' Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum.
IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12288-12300. doi: 10.1109/TNNLS.2023.3255783. Epub 2024 Sep 3.
2
A novel adaptive momentum method for medical image classification using convolutional neural network.
BMC Med Imaging. 2022 Mar 1;22(1):34. doi: 10.1186/s12880-022-00755-z.
3
PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5079-5091. doi: 10.1109/TNNLS.2019.2963066. Epub 2020 Nov 30.
4
Accelerating DNN Training Through Selective Localized Learning.
Front Neurosci. 2022 Jan 11;15:759807. doi: 10.3389/fnins.2021.759807. eCollection 2021.
5
Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution.
Neurocomputing (Amst). 2023 Sep 28;551. doi: 10.1016/j.neucom.2023.126524. Epub 2023 Jul 7.
6
Selecting the best optimizers for deep learning-based medical image segmentation.
Front Radiol. 2023 Sep 21;3:1175473. doi: 10.3389/fradi.2023.1175473. eCollection 2023.
7
Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks.
Comput Methods Programs Biomed. 2023 Oct;240:107687. doi: 10.1016/j.cmpb.2023.107687. Epub 2023 Jun 24.
8
A regularization method to improve adversarial robustness of neural networks for ECG signal classification.
Comput Biol Med. 2022 May;144:105345. doi: 10.1016/j.compbiomed.2022.105345. Epub 2022 Feb 24.
9
Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.
Comput Intell Neurosci. 2022 Jun 2;2022:9337209. doi: 10.1155/2022/9337209. eCollection 2022.
10
A Novel Learning Algorithm to Optimize Deep Neural Networks: Evolved Gradient Direction Optimizer (EVGO).
IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):685-694. doi: 10.1109/TNNLS.2020.2979121. Epub 2021 Feb 4.