Peterson G E, St Clair D C, Aylward S R, Bond W E
McDonnell Douglas Corp., St. Louis, MO.
IEEE Trans Neural Netw. 1995;6(4):949-61. doi: 10.1109/72.392257.
A significant problem in the design and construction of an artificial neural network for function approximation is limiting the magnitude and the variance of errors when the network is used in the field. Network errors can occur when the training data does not faithfully represent the required function due to noise or low sampling rates, when the network's flexibility does not match the variability of the data, or when the input data to the resultant network is noisy. This paper reports on several experiments whose purpose was to rank the relative significance of these error sources and thereby find neural network design principles for limiting the magnitude and variance of network errors.
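The three error sources named above can be illustrated with a minimal sketch. The snippet below is not the paper's method; it uses polynomial regression (with degree as a hypothetical stand-in for network flexibility) to show how sparse noisy training data, a flexibility mismatch, and noisy field inputs each contribute to error in a function-approximation setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """The 'required function' the approximator should learn."""
    return np.sin(x)

# Error source 1: training data that misrepresents the target
# due to noise and a low sampling rate (only 10 samples).
x_train = np.linspace(0.0, 2.0 * np.pi, 10)
y_train = target(x_train) + rng.normal(0.0, 0.1, x_train.size)

x_test = np.linspace(0.0, 2.0 * np.pi, 200)

def field_rmse(degree, input_noise=0.0):
    # Error source 2: the model's flexibility (polynomial degree
    # here, hidden-unit count in a network) may not match the data.
    coeffs = np.polyfit(x_train, y_train, degree)
    # Error source 3: noisy inputs when the model is used in the field.
    x_noisy = x_test + rng.normal(0.0, input_noise, x_test.size)
    pred = np.polyval(coeffs, x_noisy)
    return float(np.sqrt(np.mean((pred - target(x_test)) ** 2)))

# Too little flexibility (degree 1) underfits; moderate flexibility
# (degree 3) tracks the sine well; input noise degrades either one.
for degree in (1, 3, 9):
    print(degree, field_rmse(degree))
```

Under these assumptions, the degree-1 fit shows the flexibility-mismatch error, and raising `input_noise` shows how field-input noise inflates error even for a well-chosen model.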