
Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application.

Affiliations

College of Science, China University of Petroleum, Qingdao, 266580, China.

School of Mathematics, Southeast University, Nanjing, 211189, China.

Publication info

Neural Netw. 2019 Jul;115:50-64. doi: 10.1016/j.neunet.2019.02.011. Epub 2019 Mar 26.

DOI: 10.1016/j.neunet.2019.02.011
PMID: 30974301
Abstract

The conjugate gradient method has proven to be an effective strategy for training neural networks, owing to its low memory requirements and fast convergence. In this paper, we propose an efficient conjugate gradient method for training fully complex-valued network models based on the Wirtinger differential operator. Two techniques are adopted to enhance training performance. The first constructs a sufficient descent direction during training by designing a fine-tuned conjugate coefficient. The second determines an optimal learning rate at each iteration, rather than using a fixed constant, via a generalized Armijo search. In addition, we rigorously prove both weak and strong convergence results: the gradient norms of the objective function with respect to the weights approach zero as the iterations increase, and the weight sequence tends to the optimal point. To verify the effectiveness and rationality of the proposed method, four simulations were performed on typical regression and classification problems.
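The two ingredients the abstract describes (a conjugate direction with a descent safeguard, plus an Armijo-type search for the learning rate instead of a fixed constant) can be sketched for a simple complex least-squares objective. This is a hedged illustration only: the objective f(w) = ||Aw − b||², the Polak–Ribière-style conjugate coefficient, and the backtracking constants are assumptions for demonstration, not the paper's exact update rule.

```python
import numpy as np

def wirtinger_cg(A, b, w0, iters=500, c=1e-4, tol=1e-10):
    """Conjugate-gradient minimization of f(w) = ||A w - b||^2 over complex w,
    using the Wirtinger (conjugate) gradient, a non-negative Polak-Ribiere-style
    conjugate coefficient, and Armijo backtracking for the learning rate.
    A sketch of the general technique, not the paper's exact algorithm."""
    f = lambda w: np.linalg.norm(A @ w - b) ** 2
    # Wirtinger gradient dF/d(conj(w)) of the least-squares objective.
    grad = lambda w: A.conj().T @ (A @ w - b)

    w = w0.astype(complex)
    g = grad(w)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        slope = 2 * np.real(np.vdot(g, d))  # d/d(eta) of f(w + eta d) at eta = 0
        if slope >= 0:                      # safeguard: fall back to steepest descent
            d = -g
            slope = 2 * np.real(np.vdot(g, d))
        eta = 1.0
        # Armijo backtracking: shrink eta until the sufficient-decrease test holds.
        while f(w + eta * d) > f(w) + c * eta * slope and eta > 1e-12:
            eta *= 0.5
        w = w + eta * d
        g_new = grad(w)
        # PR+ conjugate coefficient; clipping at zero helps preserve descent.
        beta = max(0.0, np.real(np.vdot(g_new - g, g_new)) /
                        max(np.real(np.vdot(g, g)), 1e-30))
        d = -g_new + beta * d
        g = g_new
    return w
```

On a small random complex system the result can be checked against `np.linalg.lstsq`, since the quadratic objective has a unique least-squares minimizer.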


Similar articles

1. Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application. Neural Netw. 2019 Jul;115:50-64. doi: 10.1016/j.neunet.2019.02.011. Epub 2019 Mar 26.
2. Convergence analysis of an augmented algorithm for fully complex-valued neural networks. Neural Netw. 2015 Sep;69:44-50. doi: 10.1016/j.neunet.2015.05.003. Epub 2015 May 27.
3. Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus. Cogn Neurodyn. 2014 Jun;8(3):261-6. doi: 10.1007/s11571-013-9276-7. Epub 2014 Jan 3.
4. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. Neural Comput. 2010 Oct;22(10):2655-77. doi: 10.1162/NECO_a_00021.
5. Logarithmic learning for generalized classifier neural network. Neural Netw. 2014 Dec;60:133-40. doi: 10.1016/j.neunet.2014.08.004. Epub 2014 Aug 19.
6. Fractional-order gradient descent learning of BP neural networks with Caputo derivative. Neural Netw. 2017 May;89:19-30. doi: 10.1016/j.neunet.2017.02.007. Epub 2017 Feb 22.
7. Hyperbolic Gradient Operator and Hyperbolic Back-Propagation Learning Algorithms. IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1689-1702. doi: 10.1109/TNNLS.2017.2677446. Epub 2017 Mar 23.
8. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks. Cogn Neurodyn. 2015 Jun;9(3):331-40. doi: 10.1007/s11571-014-9323-z. Epub 2015 Jan 1.
9. Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks. IEEE Trans Neural Netw. 2011 Aug;22(8):1297-306. doi: 10.1109/TNN.2011.2159992.
10. Adaptive complex-valued stepsize based fast learning of complex-valued neural networks. Neural Netw. 2020 Apr;124:233-242. doi: 10.1016/j.neunet.2020.01.011. Epub 2020 Jan 25.