Two highly efficient second-order algorithms for training feedforward networks.

Author information

Ampazis N, Perantonis S J

Affiliation

Inst. of Informatics and Telecommun., Nat. Center for Sci. Res. "DEMOKRITOS", Athens, Greece.

Publication information

IEEE Trans Neural Netw. 2002;13(5):1064-74. doi: 10.1109/TNN.2002.1031939.

Abstract

We present two highly efficient second-order algorithms for the training of multilayer feedforward neural networks. The algorithms are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for nonlinear least squares problems, with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization problem. Their implementation requires minimal additional computations compared to a standard LM iteration. Simulations on large-scale classical neural-network benchmarks are presented that reveal the power of the two methods to obtain solutions to difficult problems on which other standard second-order techniques (including LM) fail to converge.
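
As a rough illustration of the iteration form described in the abstract, the sketch below combines a standard Levenberg-Marquardt step for a least-squares training objective with a momentum term. It is not the paper's algorithm: the adaptive coefficients that the authors derive from the constrained-optimization formulation are replaced here by a fixed, hypothetical momentum coefficient `mu`, and the function name and interface are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the authors' adaptive-momentum update):
# a plain Levenberg-Marquardt step for a least-squares training objective,
# combined with a momentum term weighted by a fixed coefficient `mu`.
import numpy as np

def lm_momentum_step(jacobian, residuals, prev_update, lam=1e-2, mu=0.5):
    """Return a weight update combining an LM step with a momentum term.

    jacobian    : (n_samples, n_weights) Jacobian of residuals w.r.t. weights
    residuals   : (n_samples,) current residual vector e
    prev_update : (n_weights,) previous weight update (momentum memory)
    lam         : LM damping parameter (lambda)
    mu          : momentum coefficient (fixed here; adapted each iteration
                  in the paper via the constrained-optimization formulation)
    """
    JtJ = jacobian.T @ jacobian
    grad = jacobian.T @ residuals  # gradient of 0.5 * ||e||^2
    # LM step: solve (J^T J + lam * I) d = -J^T e
    lm_step = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), -grad)
    return lm_step + mu * prev_update  # add the momentum term

# Usage: inside a training loop, add the returned update to the weight
# vector and keep it as `prev_update` for the next iteration.
```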
