Bourlard H, Kamp Y
Philips Research Laboratory, Brussels, Belgium.
Biol Cybern. 1988;59(4-5):291-4. doi: 10.1007/BF00332918.
The multilayer perceptron, when working in auto-association mode, is sometimes considered an interesting candidate for performing data compression or dimensionality reduction of the feature space in information processing applications. The present paper shows that, for auto-association, the nonlinearities of the hidden units are useless and that the optimal parameter values can be derived directly by purely linear techniques relying on singular value decomposition and low rank matrix approximation, similar in spirit to the well-known Karhunen-Loève transform. This approach thus appears to be an efficient alternative to the general error back-propagation algorithm commonly used for training multilayer perceptrons. Moreover, it also gives a clear interpretation of the role of the different parameters.
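A minimal sketch of the idea the abstract describes (not code from the paper): for a centered data matrix, the best rank-p linear auto-associator is obtained directly from the top-p right singular vectors of the data, i.e. the Karhunen-Loève / PCA subspace, with no back-propagation needed. The variable names (X, p, W_enc, W_dec) and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 200, 10, 3            # samples, input dimension, hidden units

X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))
X -= X.mean(axis=0)             # centering plays the role of the bias terms

# SVD of the data matrix: X = U S V^T
U, S, Vt = np.linalg.svd(X, full_matrices=False)
V_p = Vt[:p].T                  # top-p right singular vectors (d x p)

W_enc = V_p                     # "input-to-hidden" weights
W_dec = V_p.T                   # "hidden-to-output" weights

X_hat = X @ W_enc @ W_dec       # rank-p linear reconstruction
err = np.linalg.norm(X - X_hat, "fro") ** 2

# Eckart-Young low-rank approximation: the reconstruction error equals
# the sum of the discarded squared singular values, so no p-hidden-unit
# auto-associator can do better in the least-squares sense.
print(err, (S[p:] ** 2).sum())
```

This mirrors the paper's claim only in outline: the squared reconstruction error achieved by the SVD solution is the theoretical minimum, so adding nonlinear hidden units cannot improve on it for this task.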