A simple method to derive bounds on the size and to train multilayer neural networks.

Author Information

Sartori M A, Antsaklis P J

Affiliation

Dept. of Electr. Eng., Notre Dame Univ., IN.

Publication Information

IEEE Trans Neural Netw. 1991;2(4):467-71. doi: 10.1109/72.88168.

Abstract

A new derivation is presented for the bounds on the size of a multilayer neural network to exactly implement an arbitrary training set; namely, the training set can be implemented with zero error with two layers and with the number of hidden-layer neurons equal to N₁ ≥ p - 1, where p is the number of training pairs. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivations. The weights for the hidden layer can be chosen almost arbitrarily, and the weights for the output layer can be found by solving N₁ + 1 linear equations. The method presented exactly solves (M), the multilayer neural network training problem, for any arbitrary training set.
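The construction summarized in the abstract maps directly onto a short procedure: fix N₁ = p - 1 hidden neurons with randomly chosen (i.e. almost arbitrary) weights, then obtain the output-layer weights and bias by solving the resulting p × p linear system. Below is a minimal NumPy sketch of that idea, assuming sigmoid hidden units and illustrative variable names (neither is prescribed by the paper); it is an illustration of the technique, not the authors' code.

```python
import numpy as np

def train_two_layer_exact(X, d, seed=None):
    """Sketch: two-layer network that exactly reproduces p training pairs.

    Assumption-laden illustration of the construction in the abstract:
    N1 = p - 1 sigmoid hidden neurons with (almost) arbitrary weights,
    output weights from N1 + 1 linear equations.
    """
    rng = np.random.default_rng(seed)
    p, n = X.shape                              # p training pairs, n inputs
    n_hidden = p - 1                            # N1 = p - 1 hidden neurons
    V = rng.standard_normal((n, n_hidden))      # hidden weights, chosen "almost arbitrarily"
    c = rng.standard_normal(n_hidden)           # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ V + c)))      # p x (p - 1) matrix of hidden outputs
    A = np.hstack([H, np.ones((p, 1))])         # append bias column -> p x p system
    w = np.linalg.solve(A, d)                   # solve the N1 + 1 linear equations

    def net(Xq):
        Hq = 1.0 / (1.0 + np.exp(-(Xq @ V + c)))
        return np.hstack([Hq, np.ones((len(Xq), 1))]) @ w

    return net

# Usage: XOR, p = 4 pairs -> 3 hidden neurons
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([0., 1., 1., 0.])
net = train_two_layer_exact(X, d, seed=0)
print(np.max(np.abs(net(X) - d)))               # ~0: zero training error
```

For almost any random choice of hidden weights the p × p matrix [H | 1] is nonsingular, so the square solve succeeds and the network reproduces the p targets exactly, which is the zero-training-error claim of the abstract.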

