Improved computation for Levenberg-Marquardt training.

Authors

Wilamowski Bogdan M, Yu Hao

Affiliation

Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849-5201, USA.

Publication Information

IEEE Trans Neural Netw. 2010 Jun;21(6):930-7. doi: 10.1109/TNN.2010.2045657. Epub 2010 Apr 19.

Abstract

The improved computation presented in this paper aims to optimize the neural-network learning process using the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian-matrix multiplication and storage, which solves the memory-limitation problem of LM training. Because the quasi-Hessian matrix is symmetric, only the elements of its upper (or lower) triangular part need to be calculated. Training speed is therefore improved significantly, both because a smaller array is stored in memory and because fewer operations are needed to compute the quasi-Hessian matrix. The memory and time savings are especially pronounced when training on large sets of patterns.
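The core idea in the abstract, accumulating the quasi-Hessian Q = JᵀJ and the gradient g = Jᵀe one pattern at a time so the full Jacobian is never stored, and filling only the upper triangle of Q because it is symmetric, can be sketched as below. This is an illustrative NumPy sketch, not the authors' implementation; the function name and the representation of per-pattern Jacobian rows are assumptions.

```python
import numpy as np

def accumulate_quasi_hessian(jacobian_rows, errors):
    """Accumulate Q = J^T J and g = J^T e pattern by pattern.

    jacobian_rows: iterable of per-pattern Jacobian rows j_p (length n each);
                   the full Jacobian matrix J is never formed or stored.
    errors:        iterable of per-pattern errors e_p.
    Only the upper triangle of Q is computed during accumulation; the
    lower triangle is mirrored in once at the end, exploiting symmetry.
    """
    jacobian_rows = [np.asarray(j, dtype=float) for j in jacobian_rows]
    n = jacobian_rows[0].size
    Q = np.zeros((n, n))
    g = np.zeros(n)
    for j_p, e_p in zip(jacobian_rows, errors):
        # rank-1 update j_p^T j_p, upper triangle (incl. diagonal) only
        for r in range(n):
            Q[r, r:] += j_p[r] * j_p[r:]
        g += j_p * e_p
    # mirror the strict upper triangle into the lower triangle
    Q = np.triu(Q) + np.triu(Q, 1).T
    return Q, g
```

Each pattern contributes a rank-1 update, so peak memory is O(n²) for Q plus one Jacobian row, rather than O(P·n) for a full P-pattern Jacobian; this matches the memory argument in the abstract.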

