Webb AR
Defence Res. Inst., Great Malvern.
IEEE Trans Neural Netw. 1994;5(3):363-71. doi: 10.1109/72.286908.
This paper considers a least-squares approach to function approximation and generalization. The particular problem addressed is one in which the training data are noiseless and the requirement is to define a mapping that approximates the data and generalizes to situations in which data samples are corrupted by noise in the input variables. For a finite number of training samples, the least-squares approach produces a generalizer that has the form of a radial basis function network. The finite-sample approximation is valid provided that the noise perturbations under the expected operating conditions are large compared with the sample spacing in the data space. In the other extreme of small noise perturbations, a particular parametric form must be assumed for the generalizer. It is shown that better generalization occurs if the error criterion used in training the generalizer is modified by the addition of a specific regularization term. This is illustrated with an approximator that has a feedforward architecture and is applied to the problem of point-source location using the outputs of an array of receivers in the focal plane of a lens.
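The abstract describes fitting a mapping to noiseless training data by least squares, with the finite-sample solution taking the form of a radial basis function network, and better generalization under input noise when the error criterion is modified by a regularization term. The sketch below illustrates that setup numerically in NumPy; the Gaussian basis width, the ridge-style penalty (standing in for the paper's specific regularization term), and the input-noise level are illustrative assumptions, not values or formulas taken from the paper.

```python
# Minimal sketch: least-squares RBF generalizer trained on noiseless data,
# evaluated under input noise, with and without an added regularization term.
# The ridge penalty here is an assumed stand-in for the paper's specific term.
import numpy as np

rng = np.random.default_rng(0)

# Noiseless training data: y = sin(2*pi*x) sampled on a grid.
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2.0 * np.pi * x_train)

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: one basis function per centre."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

centers = x_train          # finite-sample form: one centre per training point
width = 0.1                # assumed basis width (illustrative)
Phi = rbf_design(x_train, centers, width)

def fit(Phi, y, lam):
    """Least-squares weights; lam > 0 adds a simple regularization term."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

w_plain = fit(Phi, y_train, lam=0.0)    # unmodified least-squares criterion
w_reg = fit(Phi, y_train, lam=1e-2)     # criterion modified by a penalty term

# Operating conditions: inputs corrupted by noise, targets still the clean map.
sigma = 0.05                            # assumed input-noise level
x_test = rng.uniform(0.0, 1.0, 500)
x_noisy = x_test + rng.normal(0.0, sigma, x_test.shape)
y_true = np.sin(2.0 * np.pi * x_test)

Phi_test = rbf_design(x_noisy, centers, width)
for name, w in [("plain least squares", w_plain), ("regularized", w_reg)]:
    mse = np.mean((Phi_test @ w - y_true) ** 2)
    print(f"{name:>20}: test MSE under input noise = {mse:.4f}")
```

Comparing the two printed errors gives a rough sense of the effect the abstract describes: the penalized criterion trades exact interpolation of the noiseless samples for a smoother mapping that degrades more gracefully when the inputs are perturbed.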