Department of Communications, Computer, and System Sciences (DIST), University of Genoa, Via Opera Pia 13, 16145 Genova, Italy.
Neural Netw. 2011 Mar;24(2):171-82. doi: 10.1016/j.neunet.2010.10.002. Epub 2010 Nov 19.
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of a fixed set of functions, such as orthogonal polynomials or Hermite functions, whereas in neural networks one may also adjust the inner parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best-approximation operators) are not satisfied by neural networks. Moreover, optimizing the parameters of a neural network is more difficult than fitting a linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, which makes accurate approximation feasible even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and the so-called variable-basis ones, which include neural-network, radial-basis-function, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases in which neural networks outperform any linear approximator.
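To fix notation (the symbols below are standard in this line of work and are our own rendering, not quoted from the paper), the two model classes compared in the abstract can be written as

\[
\mathcal{L}_n \;=\; \Big\{ \textstyle\sum_{i=1}^{n} c_i\,\phi_i \;:\; c_1,\dots,c_n \in \mathbb{R} \Big\},
\qquad
\mathrm{span}_n G \;=\; \Big\{ \textstyle\sum_{i=1}^{n} c_i\, g_i \;:\; c_i \in \mathbb{R},\ g_i \in G \Big\},
\]

where the basis \(\phi_1,\dots,\phi_n\) of the linear model is fixed in advance, while the dictionary \(G = \{\phi(\cdot, w) : w \in W\}\) of the variable-basis model (e.g., all perceptrons \(\sigma(w \cdot x + b)\) or all radial units) is searched over as well. In a Hilbert space, for a dictionary with unit-norm elements, the Maurey–Jones–Barron bound gives the dimension-independent rate

\[
\inf_{h \in \mathrm{span}_n G} \|f - h\| \;\le\; \frac{\|f\|_G}{\sqrt{n}},
\]

where \(\|f\|_G\) denotes the \(G\)-variation norm, whereas the worst-case error of any linear approximator over a set \(A\) is bounded from below by the Kolmogorov \(n\)-width

\[
d_n(A) \;=\; \inf_{\dim X_n \le n}\ \sup_{f \in A}\ \inf_{h \in X_n} \|f - h\|.
\]

Comparing the first quantity with the second, over suitable sets of functions, is the form the linear-versus-variable-basis comparison takes.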
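As a concrete, purely illustrative companion to the abstract's distinction between adjusting only outer coefficients and also adjusting inner parameters, the following minimal NumPy sketch fits the same number of units both ways; the target function sin(3*pi*x), the monomial basis, and all training hyperparameters are arbitrary choices of ours, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(3.0 * np.pi * x)  # hypothetical target function

# Fixed-basis linear approximator: only the outer coefficients c_i are
# adjusted; the basis functions (here monomials 1, x, ..., x^{n-1}) are frozen.
n = 8
Phi = np.vander(x, n, increasing=True)
c, *_ = np.linalg.lstsq(Phi, f, rcond=None)  # closed-form least squares
linear_fit = Phi @ c

# Variable-basis approximator: one-hidden-layer tanh network with the same
# number of units; the inner parameters (w, b) are tuned by gradient descent.
w = rng.normal(size=n)
b = rng.normal(size=n)
a = 0.1 * rng.normal(size=n)
lr = 0.05
for _ in range(20000):
    h = np.tanh(np.outer(x, w) + b)          # hidden activations, shape (200, n)
    pred = h @ a
    err = pred - f
    grad_a = h.T @ err / len(x)
    common = (err[:, None] * a) * (1.0 - h**2)  # backprop through tanh
    grad_w = (common * x[:, None]).sum(axis=0) / len(x)
    grad_b = common.sum(axis=0) / len(x)
    a -= lr * grad_a
    w -= lr * grad_w
    b -= lr * grad_b

print("fixed-basis RMSE:   ", np.sqrt(np.mean((linear_fit - f) ** 2)))
print("variable-basis RMSE:", np.sqrt(np.mean((pred - f) ** 2)))

The sketch also exhibits the trade-off the abstract describes: the fixed-basis fit is a convex problem solved in one step, while tuning the inner parameters requires iterative nonconvex optimization in exchange for the richer model class.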