The Helmholtz machine.
Dayan P, Hinton GE, Neal RM, Zemel RS.
Department of Computer Science, University of Toronto, Ontario, Canada.
Neural Comput. 1995 Sep;7(5):889-904. doi: 10.1162/neco.1995.7.5.889.
Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways.
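The bound mentioned in the abstract can be illustrated concretely. The following minimal sketch (with hypothetical parameter values, not taken from the paper) uses a generative model with a single binary latent cause h and a single binary observation x. It compares the exact log-likelihood, obtained by summing over both settings of h, with the variational lower bound log p(x) ≥ E_q[log p(h) + log p(x|h) − log q(h)], which is easy to compute for any recognition distribution q and is tight when q equals the true posterior p(h|x).

```python
import math

def log_p_x(x, prior_h, p_x_given_h):
    """Exact log-likelihood: sum the joint p(h) p(x|h) over both values of h.

    For richer models this sum has exponentially many terms, which is the
    intractability the abstract's lower bound is designed to avoid.
    """
    total = sum(
        (prior_h if h == 1 else 1 - prior_h)
        * (p_x_given_h[h] if x == 1 else 1 - p_x_given_h[h])
        for h in (0, 1)
    )
    return math.log(total)

def elbo(x, q_h1, prior_h, p_x_given_h):
    """Lower bound on log p(x) under a recognition distribution q(h=1|x) = q_h1."""
    bound = 0.0
    for h in (0, 1):
        q = q_h1 if h == 1 else 1 - q_h1
        if q == 0.0:
            continue  # the q * log(...) term vanishes as q -> 0
        log_joint = (
            math.log(prior_h if h == 1 else 1 - prior_h)
            + math.log(p_x_given_h[h] if x == 1 else 1 - p_x_given_h[h])
        )
        bound += q * (log_joint - math.log(q))
    return bound

# Hypothetical parameters for illustration only.
prior_h = 0.3                    # p(h = 1)
p_x_given_h = {0: 0.1, 1: 0.8}   # p(x = 1 | h)
x = 1

exact = log_p_x(x, prior_h, p_x_given_h)
# The bound holds for every choice of q ...
for q in (0.1, 0.5, 0.9):
    assert elbo(x, q, prior_h, p_x_given_h) <= exact + 1e-12
# ... and is tight at the true posterior q = p(h=1 | x=1).
posterior = (prior_h * p_x_given_h[1]) / math.exp(exact)
assert abs(elbo(x, posterior, prior_h, p_x_given_h) - exact) < 1e-6
```

Maximizing this bound with respect to the generative parameters (and tightening it with respect to q) is what sidesteps the combinatorial sum over latent configurations in larger models.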