Cengiz Pehlevan, Anirvan M. Sengupta, Dmitri B. Chklovskii
Center for Computational Biology, Flatiron Institute, New York, NY 10010, U.S.A.
Center for Computational Biology, Flatiron Institute, New York, NY 10010, U.S.A., and Physics and Astronomy Department, Rutgers University, New Brunswick, NJ 08901, U.S.A.
Neural Comput. 2018 Jan;30(1):84-124. doi: 10.1162/neco_a_01018. Epub 2017 Sep 28.
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
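For concreteness, the basic similarity matching objective referred to in the abstract can be written, following Pehlevan and Chklovskii's earlier work on similarity matching, as

$$\min_{Y \in \mathbb{R}^{k \times T}} \frac{1}{T^2} \left\lVert X^\top X - Y^\top Y \right\rVert_F^2,$$

where the columns of $X \in \mathbb{R}^{n \times T}$ are $T$ centered inputs and the columns of $Y$ are the corresponding $k$-dimensional outputs. Below is a minimal NumPy sketch of the kind of online Hebbian/anti-Hebbian network this family of objectives yields. The function name, step size, initialization, and the use of a single learning rate for both weight matrices are illustrative assumptions, not the paper's exact algorithm; the paper derives the updates, including the relative learning rates of the two weight matrices, from a principled min-max objective.

```python
import numpy as np

def online_similarity_matching(X, k, eta=1e-3, n_epochs=10, seed=0):
    """Online Hebbian/anti-Hebbian sketch for similarity matching.

    X : (d, T) array of T centered inputs, one sample per column.
    k : number of output neurons (k <= d).
    Returns feedforward weights W (k, d) and lateral weights M (k, k).
    """
    rng = np.random.default_rng(seed)
    d, T = X.shape
    W = rng.standard_normal((k, d)) / np.sqrt(d)  # feedforward weights
    M = np.zeros((k, k))                          # lateral inhibition

    for _ in range(n_epochs):
        for t in range(T):
            x = X[:, t]
            # Recurrent dynamics settle to the fixed point of
            # y = W x - M y, i.e. y = (I + M)^{-1} W x.
            y = np.linalg.solve(np.eye(k) + M, W @ x)
            # Hebbian update: grow with pre/post correlation, decay toward W.
            W += eta * (np.outer(y, x) - W)
            # Anti-Hebbian update of lateral weights; no self-coupling.
            M += eta * (np.outer(y, y) - M)
            np.fill_diagonal(M, 0.0)
    return W, M
```

Note that each update uses only quantities available at the corresponding synapse (pre- and postsynaptic activities and the current weight), which is the locality property the abstract emphasizes; in the offline limit, the outputs of such networks are expected to span the top-$k$ principal subspace of the data.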