

Adaptive CL-BFGS Algorithms for Complex-Valued Neural Networks

Authors

Zhang Yongliang, Huang He, Shen Gangxiang

Publication

IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6313-6327. doi: 10.1109/TNNLS.2021.3135553. Epub 2023 Sep 1.

DOI: 10.1109/TNNLS.2021.3135553
PMID: 34995196
Abstract

Complex-valued limited-memory BFGS (CL-BFGS) algorithm is efficient for the training of complex-valued neural networks (CVNNs). As an important parameter, the memory size represents the number of saved vector pairs and would essentially affect the performance of the algorithm. However, the determination of a suitable memory size for the CL-BFGS algorithm remains challenging. To deal with this issue, an adaptive method is proposed in which the memory size is allowed to vary during the iteration process. Basically, at each iteration, with the help of multistep quasi-Newton method, an appropriate memory size is chosen from a variable set {1,2, ... , M} by approximating complex Hessian matrix as close as possible. To reduce the computational complexity and ensure desired performance, the upper bound M is adjustable according to the moving average of memory sizes found in previous iterations. The proposed adaptive CL-BFGS (ACL-BFGS) algorithm can be efficiently applied for the training of CVNNs. Moreover, it is suggested to take multiple memory sizes to construct the search direction, which further improves the performance of the ACL-BFGS algorithm. Experimental results on some benchmark problems including the pattern classification, complex function approximation, and nonlinear channel equalization problems are given to illustrate the advantages of the developed algorithms over some previous ones.
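The adaptive memory-size idea can be illustrated with the standard L-BFGS two-loop recursion, where the memory size m becomes a per-iteration argument instead of a fixed constant. The sketch below is real-valued and illustrative only: the paper's algorithm operates on complex-valued parameters and additionally selects m via a multistep quasi-Newton criterion, which is not reproduced here; the function name and structure are assumptions.

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list, m):
    """L-BFGS two-loop recursion using the last m saved (s, y) pairs.

    In ACL-BFGS the memory size m is chosen adaptively from {1, ..., M}
    at each iteration; here it is simply a parameter. Real-valued sketch
    only -- the paper works with complex-valued networks.
    """
    # keep only the m most recent vector pairs (m >= 1 per the paper)
    k = max(len(s_list) - m, 0)
    s_list, y_list = s_list[k:], y_list[k:]

    q = grad.copy()
    history = []
    # first loop: newest pair to oldest
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        history.append((rho, a, s, y))

    # initial Hessian approximation H0 = gamma * I from the newest pair
    if y_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)

    # second loop: oldest pair to newest
    for rho, a, s, y in reversed(history):
        b = rho * (y @ q)
        q += (a - b) * s

    return -q  # descent direction
```

With an empty memory the recursion reduces to steepest descent, and a curvature pair with s = y (identity curvature) leaves the direction at -grad, which gives two quick sanity checks on the implementation.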


Similar Articles

1. Adaptive CL-BFGS Algorithms for Complex-Valued Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6313-6327. doi: 10.1109/TNNLS.2021.3135553. Epub 2023 Sep 1.
2. Partial BFGS update and efficient step-length calculation for three-layer neural networks.
   Neural Comput. 1997 Jan 1;9(1):123-41. doi: 10.1162/neco.1997.9.1.123.
3. Newton-Raphson preconditioner for Krylov type solvers on GPU devices.
   Springerplus. 2016 Jun 21;5(1):788. doi: 10.1186/s40064-016-2346-7. eCollection 2016.
4. Adaptive complex-valued stepsize based fast learning of complex-valued neural networks.
   Neural Netw. 2020 Apr;124:233-242. doi: 10.1016/j.neunet.2020.01.011. Epub 2020 Jan 25.
5. LM-CMA: An Alternative to L-BFGS for Large-Scale Black Box Optimization.
   Evol Comput. 2017 Spring;25(1):143-171. doi: 10.1162/EVCO_a_00168. Epub 2015 Oct 1.
6. Link Between and Comparison and Combination of Zhang Neural Network and Quasi-Newton BFGS Method for Time-Varying Quadratic Minimization.
   IEEE Trans Cybern. 2013 Apr;43(2):490-503. doi: 10.1109/TSMCB.2012.2210038. Epub 2013 Mar 7.
7. A training algorithm with selectable search direction for complex-valued feedforward neural networks.
   Neural Netw. 2021 May;137:75-84. doi: 10.1016/j.neunet.2021.01.014. Epub 2021 Jan 28.
8. Fast Quasi-Newton Algorithms for Penalized Reconstruction in Emission Tomography and Further Improvements via Preconditioning.
   IEEE Trans Med Imaging. 2018 Apr;37(4):1000-1010. doi: 10.1109/TMI.2017.2786865.
9. A Stochastic Quasi-Newton Method for Large-Scale Nonconvex Optimization With Applications.
   IEEE Trans Neural Netw Learn Syst. 2020 Nov;31(11):4776-4790. doi: 10.1109/TNNLS.2019.2957843. Epub 2020 Oct 29.
10. Discrete-Time Zhang Neural Network for Online Time-Varying Nonlinear Optimization With Application to Manipulator Motion Generation.
    IEEE Trans Neural Netw Learn Syst. 2015 Jul;26(7):1525-31. doi: 10.1109/TNNLS.2014.2342260. Epub 2014 Aug 6.