Department of Earth and Ocean Sciences, University of British Columbia, Canada.
Neural Netw. 2011 Mar;24(2):159-70. doi: 10.1016/j.neunet.2010.10.001. Epub 2010 Oct 15.
By means of mathematical analysis and numerical experimentation, this study shows that the problems of non-uniqueness of solutions and data over-fitting that plague the multilayer feedforward neural network for nonlinear principal component analysis (NLPCA) are caused by an inappropriate neural network architecture. A simplified two-hidden-layer feedforward neural network, which has no encoding layer and no bias terms in the mathematical definitions of the bottleneck and output neurons, is proposed for conducting NLPCA. This new, compact NLPCA model alleviates the aforementioned problems encountered with the more complex neural network architecture for NLPCA. The numerical experiments are based on a data set generated from a well-known nonlinear system, the Lorenz chaotic attractor. Given the same number of bottleneck neurons (i.e., reduced dimensions), the compact NLPCA model effectively characterizes and represents the Lorenz attractor with significantly fewer parameters than the corresponding three-hidden-layer feedforward neural network for NLPCA.
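The architecture described above can be illustrated with a short sketch. The code below generates a Lorenz-attractor time series and compares the parameter counts of a Kramer-style three-hidden-layer NLPCA network (encoding, bottleneck, and decoding layers, all with biases) against the compact model (input feeding the bottleneck directly, with no bias on the bottleneck or output neurons). This is a minimal illustration under assumed layer conventions, not the paper's actual implementation; the integration settings and hidden-layer size `m` are arbitrary choices for demonstration.

```python
import numpy as np

def lorenz_series(n=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler.
    Step size, length, and initial state are illustrative, not the paper's settings."""
    xyz = np.empty((n, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

def n_params_standard(p, m, q):
    """Kramer-style NLPCA with three hidden layers, all with biases:
    input(p) -> encoding(m) -> bottleneck(q) -> decoding(m) -> output(p)."""
    return (p * m + m) + (m * q + q) + (q * m + m) + (m * p + p)

def n_params_compact(p, m, q):
    """Compact NLPCA (assumed layout): no encoding layer, no bias on the
    bottleneck or output neurons:
    input(p) -> bottleneck(q, no bias) -> hidden(m) -> output(p, no bias)."""
    return (p * q) + (q * m + m) + (m * p)
```

For a 3-dimensional Lorenz input (`p = 3`), one bottleneck neuron (`q = 1`), and, say, `m = 4` neurons per nonlinear hidden layer, the standard network has 44 free parameters while the compact one has 23, illustrating the roughly twofold reduction in model complexity the abstract refers to.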