On the complexity of computing and learning with multiplicative neural networks.

Author Information

Schmitt Michael

Affiliation

Lehrstuhl Mathematik und Informatik, Fakultät für Mathematik, Ruhr-Universität Bochum, D-44780 Bochum, Germany.

Publication Information

Neural Comput. 2002 Feb;14(2):241-301. doi: 10.1162/08997660252741121.

DOI: 10.1162/08997660252741121
PMID: 11802913
Abstract

In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks that contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial with the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set components bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with super-linear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these, we derive some asymptotically tight bounds. Multiplication plays an important role in both neural modeling of biological behavior and computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.
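To make the unit types concrete: a standard summing unit computes sigma(sum_i w_i x_i + b); a product unit computes prod_i x_i^(w_i), so the trainable parameters are exponents rather than coefficients; and a sigma-pi (higher-order) unit applies a sigmoid to a weighted sum of input monomials. The following sketch is illustrative only (the function names and example monomials are ours, not from the paper) and contrasts the three unit types in NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def summing_unit(x, w, b=0.0):
    """Standard sigmoidal unit: inputs combine additively."""
    return sigmoid(np.dot(w, x) + b)

def product_unit(x, w):
    """Product unit: y = prod_i x_i ** w_i, with trainable exponents w.
    For positive x this equals exp(sum_i w_i * log(x_i))."""
    return np.prod(np.power(x, w))

def sigma_pi_unit(x, monomials, w, b=0.0):
    """Sigma-pi (higher-order) unit: sigmoid of a weighted sum of
    input monomials, e.g. (0, 1) denotes the product x[0] * x[1]."""
    terms = np.array([np.prod(x[list(m)]) for m in monomials])
    return sigmoid(np.dot(w, terms) + b)

x = np.array([0.5, 2.0])
print(summing_unit(x, np.array([1.0, -0.5])))     # sigmoid(0.5 - 1.0)
print(product_unit(x, np.array([2.0, -1.0])))     # 0.5**2 * 2.0**-1 = 0.125
print(sigma_pi_unit(x, [(0,), (1,), (0, 1)],
                    np.array([1.0, 1.0, -2.0])))  # includes second-order term x0*x1
```

In a product unit the exponents play the role that weights play in a summing unit, which is what lets inputs interact nonlinearly and what the paper's VC-dimension and pseudo-dimension bounds are designed to measure.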


Similar Articles

1. On the complexity of computing and learning with multiplicative neural networks.
Neural Comput. 2002 Feb;14(2):241-301. doi: 10.1162/08997660252741121.
2. Neural networks with local receptive fields and superlinear VC dimension.
Neural Comput. 2002 Apr;14(4):919-56. doi: 10.1162/089976602317319018.
3. Descartes' rule of signs for radial basis function neural networks.
Neural Comput. 2002 Dec;14(12):2997-3011. doi: 10.1162/089976602760805386.
4. Evolutionary product unit based neural networks for regression.
Neural Netw. 2006 May;19(4):477-86. doi: 10.1016/j.neunet.2005.11.001. Epub 2006 Feb 14.
5. On the capabilities of higher-order neurons: a radial basis function approach.
Neural Comput. 2005 Mar;17(3):715-29. doi: 10.1162/0899766053019953.
6. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
7. General-purpose computation with neural networks: a survey of complexity theoretic results.
Neural Comput. 2003 Dec;15(12):2727-78. doi: 10.1162/089976603322518731.
8. Complexity and non-commutativity of learning operations on graphs.
Biosystems. 2006 Jul;85(1):84-93. doi: 10.1016/j.biosystems.2006.03.001. Epub 2006 May 9.
9. The Vapnik-Chervonenkis dimension of graph and recursive neural networks.
Neural Netw. 2018 Dec;108:248-259. doi: 10.1016/j.neunet.2018.08.010. Epub 2018 Sep 1.
10. Linear constraints on weight representation for generalized learning of multilayer networks.
Neural Comput. 2001 Dec;13(12):2851-63. doi: 10.1162/089976601317098556.

Cited By

1. Single Neuron for Solving XOR like Nonlinear Problems.
Comput Intell Neurosci. 2022 Apr 28;2022:9097868. doi: 10.1155/2022/9097868. eCollection 2022.
2. An Evolutionary Field Theorem: Evolutionary Field Optimization in Training of Power-Weighted Multiplicative Neurons for Nitrogen Oxides-Sensitive Electronic Nose Applications.
Sensors (Basel). 2022 May 18;22(10):3836. doi: 10.3390/s22103836.
3. Using machine learning methods to determine a typology of patients with HIV-HCV infection to be treated with antivirals.
PLoS One. 2020 Jan 10;15(1):e0227188. doi: 10.1371/journal.pone.0227188. eCollection 2020.
4. Heaviness perception. IV. Weight x aperture(-1) as a heaviness model in finger-grasp perception.
Exp Brain Res. 2003 Dec;153(3):297-301. doi: 10.1007/s00221-003-1622-2. Epub 2003 Sep 12.