

On the complexity of computing and learning with multiplicative neural networks.

Author information

Schmitt Michael

Affiliation

Lehrstuhl Mathematik und Informatik, Fakultät für Mathematik, Ruhr-Universität Bochum, D-44780 Bochum, Germany.

Publication information

Neural Comput. 2002 Feb;14(2):241-301. doi: 10.1162/08997660252741121.

Abstract

In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks that contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial with the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with super-linear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these, we derive some asymptotically tight bounds. Multiplication plays an important role in both neural modeling of biological behavior and computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.
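To make the unit types mentioned in the abstract concrete, here is a minimal illustrative sketch (not from the paper; function names are my own) contrasting a standard summing unit with a product unit, which raises each input to a real-valued exponent and multiplies the results, and a sigma-pi (higher-order) unit, which forms a weighted sum of products over subsets of the inputs:

```python
import math

def summing_unit(x, w, b=0.0):
    # Standard neuron pre-activation: weighted sum of the inputs plus a bias.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def product_unit(x, w):
    # Product unit: inputs raised to real-valued exponents and multiplied,
    # i.e. prod_i x_i ** w_i, letting inputs interact nonlinearly.
    return math.prod(xi ** wi for wi, xi in zip(w, x))

def sigma_pi_unit(x, weights, monomials):
    # Sigma-pi (higher-order) unit: weighted sum of monomials, where each
    # entry of `monomials` is a tuple of input indices to multiply together.
    return sum(w * math.prod(x[i] for i in idx)
               for w, idx in zip(weights, monomials))

# Example: a product unit computing x0 * x1**2, and a sigma-pi unit
# computing 0.5*x0*x1 + 1.0*x1*x2.
print(product_unit([2.0, 3.0], [1.0, 2.0]))                      # 18.0
print(sigma_pi_unit([1.0, 2.0, 3.0], [0.5, 1.0], [(0, 1), (1, 2)]))  # 7.0
```

A feedforward network mixing such units with sigmoidal ones is the general case whose pseudo-dimension the paper bounds.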

