

Parallel nonlinear optimization techniques for training neural networks.

Authors

Phua P H, Ming Daohua

Affiliations

Dept. of Comput. Sci., Nat. Univ. of Singapore, Singapore.

Publication

IEEE Trans Neural Netw. 2003;14(6):1460-8. doi: 10.1109/TNN.2003.820670.

DOI: 10.1109/TNN.2003.820670
PMID: 18244591
Abstract

In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed by using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated. Each of these directions is selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested over a set of nine benchmark problems. Computational results show that the proposed algorithms outperform other existing methods, which are evaluated over the same set of test problems.

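The approach in the abstract can be illustrated with a minimal sketch: maintain one inverse-Hessian approximation per quasi-Newton member (BFGS and DFP are used here as the representative members; the paper's exact choice of QN class may differ), generate one search direction per member each iteration, run an inexact line search along each, and keep the best point. The self-scaling step rescales each approximation by gamma = s'y / y'Hy before its update. This is an assumed reconstruction for illustration, not the authors' implementation; the trial-step line search and function names are hypothetical.

```python
import numpy as np

def inexact_line_search(f, x, d, trials=(1.0, 0.5, 0.25, 0.1, 0.05, 0.01, 0.001)):
    """Crude inexact line search: evaluate a few trial steps, keep the best."""
    best_a, best_v = 0.0, f(x)
    for a in trials:
        v = f(x + a * d)
        if v < best_v:
            best_a, best_v = a, v
    return best_a, best_v

def bfgs_inverse_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation."""
    rho = 1.0 / (s @ y)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def dfp_inverse_update(H, s, y):
    """DFP update of the inverse-Hessian approximation."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def parallel_ssqn(f, grad, x0, iters=200, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    n = x.size
    Hs = [np.eye(n), np.eye(n)]   # one inverse-Hessian approx per QN member
    updates = [bfgs_inverse_update, dfp_inverse_update]
    g = grad(x)
    for _ in range(iters):
        # One candidate direction per member; these searches are independent
        # and could run in parallel, as the paper proposes.
        dirs = [-H @ g for H in Hs]
        results = [inexact_line_search(f, x, d) for d in dirs]
        best = min(range(len(dirs)), key=lambda i: results[i][1])
        s = results[best][0] * dirs[best]
        if np.linalg.norm(s) < tol:
            break
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if s @ y > 1e-12:             # curvature condition keeps H positive definite
            for i, upd in enumerate(updates):
                gamma = (s @ y) / (y @ Hs[i] @ y)   # self-scaling factor
                Hs[i] = upd(gamma * Hs[i], s, y)
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic such as f(x) = (x0 - 3)^2 + 10(x1 + 1)^2, the sketch converges to the minimizer (3, -1); both members share the same (s, y) curvature pair, so whichever direction wins the line search still improves both approximations.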

Similar Articles

1
Parallel nonlinear optimization techniques for training neural networks.
IEEE Trans Neural Netw. 2003;14(6):1460-8. doi: 10.1109/TNN.2003.820670.
2
Faster Stochastic Quasi-Newton Methods.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4388-4397. doi: 10.1109/TNNLS.2021.3056947. Epub 2022 Aug 31.
3
Subsampled Hessian Newton Methods for Supervised Learning.
Neural Comput. 2015 Aug;27(8):1766-95. doi: 10.1162/NECO_a_00751. Epub 2015 Jun 16.
4
Parallel Coordinate Descent Newton Method for Efficient L1-Regularized Loss Minimization.
IEEE Trans Neural Netw Learn Syst. 2019 Nov;30(11):3233-3245. doi: 10.1109/TNNLS.2018.2889976. Epub 2019 Mar 6.
5
Asynchronous Parallel Stochastic Quasi-Newton Methods.
Parallel Comput. 2021 Apr;101. doi: 10.1016/j.parco.2020.102721. Epub 2020 Nov 4.
6
Adaptive CL-BFGS Algorithms for Complex-Valued Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6313-6327. doi: 10.1109/TNNLS.2021.3135553. Epub 2023 Sep 1.
7
Quasi-Newton parallel geometry optimization methods.
J Chem Phys. 2010 Jul 21;133(3):034116. doi: 10.1063/1.3455719.
8
A novel recurrent neural network for solving nonlinear optimization problems with inequality constraints.
IEEE Trans Neural Netw. 2008 Aug;19(8):1340-53. doi: 10.1109/TNN.2008.2000273.
9
A new neural network for solving nonlinear projection equations.
Neural Netw. 2007 Jul;20(5):577-89. doi: 10.1016/j.neunet.2007.01.001. Epub 2007 Feb 11.
10
Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks.
IEEE Trans Neural Netw. 1992;3(6):864-75. doi: 10.1109/72.165589.

Cited By

1
A Computational Method for Optimizing Experimental Environments for Phellinus igniarius via Genetic Algorithm and BP Neural Network.
Biomed Res Int. 2016;2016:4374603. doi: 10.1155/2016/4374603. Epub 2016 Aug 9.
2
Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory.
Sensors (Basel). 2015 Nov 13;15(11):28772-95. doi: 10.3390/s151128772.