Parallel nonlinear optimization techniques for training neural networks.

Authors

Phua P H, Ming Daohua

Affiliation

Dept. of Comput. Sci., Nat. Univ. of Singapore, Singapore.

Publication Information

IEEE Trans Neural Netw. 2003;14(6):1460-8. doi: 10.1109/TNN.2003.820670.

Abstract

In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested on a set of nine benchmark problems. Computational results show that the proposed algorithms outperform other existing methods evaluated on the same set of test problems.
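
To make the scheme concrete, below is a minimal sketch (not the authors' implementation) of one parallel SSQN-style iteration in Python: two branches maintain separate inverse-Hessian estimates via self-scaled BFGS and DFP updates, each branch proposes a search direction, an inexact (Armijo backtracking) line search is run along each, and the best resulting point is kept. The Rosenbrock function stands in for a network's training loss; all function names, tolerances, and the choice of BFGS/DFP as the representative QN pair are illustrative assumptions.

```python
# A minimal sketch of a parallel self-scaling quasi-Newton (SSQN) step.
# Not the authors' code: BFGS/DFP as the QN pair, the Rosenbrock test
# function, and all tolerances below are illustrative assumptions.
import numpy as np

def f(x):           # stand-in for the training loss
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):        # analytic gradient of the stand-in loss
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def self_scale(H, s, y):
    # Oren-Luenberger self-scaling factor applied before the QN update.
    yHy = y @ H @ y
    return ((s @ y) / yHy) * H if yHy > 1e-12 else H

def bfgs_update(H, s, y):
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def dfp_update(H, s, y):
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def armijo(x, d, g, c1=1e-4, beta=0.5, max_halvings=40):
    # Inexact line search: backtrack until the Armijo condition holds.
    alpha, fx, slope = 1.0, f(x), g @ d
    for _ in range(max_halvings):
        if f(x + alpha * d) <= fx + c1 * alpha * slope:
            break
        alpha *= beta
    return alpha

x = np.array([-1.2, 1.0])
Hs = [np.eye(2), np.eye(2)]          # one inverse-Hessian estimate per branch
updates = [bfgs_update, dfp_update]  # a representative set of QN methods

for it in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    # Each branch proposes a direction and runs its own line search;
    # these are the natural unit of parallelism, run sequentially here.
    candidates = []
    for H in Hs:
        d = -H @ g
        if g @ d >= 0:               # safeguard: fall back to steepest descent
            d = -g
        alpha = armijo(x, d, g)
        candidates.append((f(x + alpha * d), x + alpha * d))
    fx_new, x_new = min(candidates, key=lambda c: c[0])
    s, y = x_new - x, grad(x_new) - g
    if s @ y > 1e-12:                # curvature condition: keep updates stable
        Hs = [upd(self_scale(H, s, y), s, y) for H, upd in zip(Hs, updates)]
    x = x_new

print(f"converged to {x} with loss {f(x):.3e} after {it} iterations")
```

In the paper's setting, the per-direction inexact line searches are what run in parallel; the sketch evaluates them sequentially only to stay short and self-contained.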
