Jankovic Marko V, Ogawa Hidemitsu
Electrical Engineering Institute "Nikola Tesla," 11000 Belgrade, Serbia and Montenegro.
IEEE Trans Neural Netw. 2006 Mar;17(2):345-56. doi: 10.1109/TNN.2005.863455.
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping onto a lower-dimensional subspace; the subspace analyzed here is the principal component subspace. Compared with other well-known methods for extracting the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies in order to modify an individual efficacy. The simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, can also be seen as good features of the proposed method.
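For context, the baseline the abstract contrasts against can be illustrated concretely. The sketch below (an assumption for illustration, not the paper's MHO rule) implements Oja's Subspace Learning Algorithm, W ← W + η(x yᵀ − W y yᵀ) with y = Wᵀx: note that the decay term W y yᵀ couples each individual weight update to the current values of the other efficacies, which is exactly the non-local dependence the MHO method is said to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5-D inputs whose variance lies mostly in a 2-D subspace.
n, m = 5, 2
basis = np.linalg.qr(rng.normal(size=(n, n)))[0][:, :m]  # orthonormal columns
X = (basis @ rng.normal(size=(m, 2000))).T + 0.01 * rng.normal(size=(2000, n))

W = 0.1 * rng.normal(size=(n, m))
eta = 0.01
for epoch in range(20):
    for x in X:
        y = W.T @ x  # output of the linear feedforward network
        # Oja's subspace rule: the term W @ np.outer(y, y) makes every
        # weight's update depend on the other weights' current values.
        W += eta * (np.outer(x, y) - W @ np.outer(y, y))

# The columns of W should now span (approximately) the principal subspace.
P_true = basis @ basis.T
P_est = W @ np.linalg.pinv(W.T @ W) @ W.T  # projector onto span(W)
print(np.linalg.norm(P_true - P_est))      # small value → subspaces match
```

The comparison uses subspace projectors rather than W itself because subspace-learning rules recover the span of the leading eigenvectors, not the eigenvectors individually.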