Department of Physics, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
Neural Comput. 2022 Jul 14;34(8):1676-1700. doi: 10.1162/neco_a_01505.
We describe a stochastic, dynamical system capable of inference and learning in a probabilistic latent variable model. The most challenging problem in such models, sampling the posterior distribution over latent variables, is proposed to be solved by harnessing natural sources of stochasticity inherent in electronic and neural systems. We demonstrate this idea for a sparse coding model by deriving a continuous-time equation for inferring its latent variables via Langevin dynamics. The model parameters are learned by simultaneously evolving according to another continuous-time equation, thus bypassing the need for digital accumulators or a global clock. Moreover, we show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the L0 sparse regime, where latent variables are encouraged to be set to zero as opposed to having a small L1 norm. This allows the model to properly incorporate the notion of sparsity rather than having to resort to a relaxed version of sparsity to make optimization tractable. Simulations of the proposed dynamical system on both synthetic and natural image data sets demonstrate that the model is capable of probabilistically correct inference, enabling learning of the dictionary as well as the parameters of the prior.
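To make the two continuous-time processes in the abstract concrete, the following is a minimal sketch of the general idea: discretized Langevin updates on the latent variables of a sparse coding model, interleaved with a local dictionary update driven by the sampled codes. It is not the paper's exact formulation; in particular it uses a Laplace (L1) prior as a stand-in for the paper's sparse prior, and the hyperparameters (eta, sigma, lam, lr_phi) and function name langevin_sparse_coding are illustrative assumptions.

```python
import numpy as np

def langevin_sparse_coding(X, n_latents, n_steps=200, n_epochs=50,
                           eta=1e-3, sigma=0.1, lam=1.0, lr_phi=0.01, seed=0):
    """Toy sketch: sample sparse codes a ~ p(a | x) with discretized Langevin
    dynamics, then update the dictionary Phi using the sampled codes.

    Assumes a Gaussian likelihood x = Phi @ a + noise and a Laplace (L1) prior
    on a; all hyperparameter values here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_pixels = X.shape
    Phi = rng.normal(size=(n_pixels, n_latents))
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

    for _ in range(n_epochs):
        for x in X:
            a = np.zeros(n_latents)
            # Inference: unadjusted Langevin updates on log p(a | x),
            # i.e. gradient of the log-posterior plus Gaussian noise.
            for _ in range(n_steps):
                grad = Phi.T @ (x - Phi @ a) / sigma**2 - lam * np.sign(a)
                a += eta * grad + np.sqrt(2.0 * eta) * rng.normal(size=n_latents)
            # Learning: local, Hebbian-like update from the reconstruction
            # residual and the sampled code, followed by column renormalization.
            Phi += lr_phi * np.outer(x - Phi @ a, a)
            Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)
    return Phi
```

In the paper's setting, the inner and outer loops above would instead run as two coupled continuous-time stochastic processes operating on different timescales, with the noise supplied by the physical substrate rather than a pseudorandom generator.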