Vafaii Hadi, Galor Dekel, Yates Jacob L
UC Berkeley.
ArXiv. 2025 May 16:arXiv:2410.19315v2.
Inference in both brains and machines can be formalized by optimizing a shared objective: maximizing the evidence lower bound (ELBO) in machine learning, or minimizing variational free energy (F) in neuroscience. While this equivalence suggests a unifying framework, it leaves open how inference is implemented in neural systems. Here, we show that online natural gradient descent on F, under Poisson assumptions, leads to a recurrent spiking neural network that performs variational inference via membrane potential dynamics. The resulting model, the iterative Poisson variational autoencoder (iP-VAE), replaces the encoder network with local updates derived from natural gradient descent on F. Theoretically, iP-VAE yields a number of desirable features, such as emergent normalization via lateral competition and hardware-efficient integer spike count representations. Empirically, iP-VAE outperforms both standard VAEs and Gaussian-based predictive coding models in sparsity, reconstruction, and biological plausibility. iP-VAE also exhibits strong generalization to out-of-distribution inputs, exceeding hybrid iterative-amortized VAEs. These results demonstrate how deriving inference algorithms from first principles can yield concrete architectures that are simultaneously biologically plausible and empirically effective.
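To make the "iterative inference without an encoder" idea concrete, here is a minimal sketch of ELBO ascent with Poisson latents. It is not the paper's iP-VAE update rule: it assumes a linear Gaussian decoder and plain (not natural) gradient steps, and all names (Phi, u, prior_rate, lr, n_steps) are illustrative assumptions. The point it illustrates is that the latent code is refined by repeated local updates to a log-rate variable (playing the role of a membrane potential), and the final representation is an integer spike count.

```python
# Illustrative sketch only (assumed linear Gaussian decoder, plain gradient ascent),
# not the authors' exact iP-VAE derivation, which uses natural gradients on F.
import numpy as np

rng = np.random.default_rng(0)

D, K = 64, 32                                  # observation dim, number of latent units
Phi = rng.normal(0, 1 / np.sqrt(K), size=(D, K))  # linear generative dictionary (assumed)
prior_rate = 0.1 * np.ones(K)                  # Poisson prior rate per latent unit
sigma2 = 0.1                                   # assumed Gaussian observation noise variance

def elbo_grad(u, x):
    """Gradient of a per-sample ELBO w.r.t. the log-rate u ('membrane potential'),
    using the posterior-mean reconstruction E[z] = rate = exp(u)."""
    rate = np.exp(u)
    x_hat = Phi @ rate                               # mean reconstruction
    d_recon_d_rate = Phi.T @ (x - x_hat) / sigma2    # d(Gaussian log-likelihood)/d(rate)
    d_kl_d_rate = np.log(rate / prior_rate)          # d KL[Pois(rate) || Pois(prior_rate)]/d(rate)
    # Chain rule through rate = exp(u): d/du = rate * d/d(rate)
    return rate * (d_recon_d_rate - d_kl_d_rate)

def infer(x, n_steps=50, lr=0.05):
    """Iterative inference: repeated local updates to u, no amortized encoder.
    The coupling through Phi.T @ Phi in the reconstruction term acts as a
    lateral interaction between latent units."""
    u = np.log(prior_rate)                           # start at the prior
    for _ in range(n_steps):
        u += lr * elbo_grad(u, x)                    # gradient ascent on the per-sample ELBO
    spikes = rng.poisson(np.exp(u))                  # integer spike-count representation
    return u, spikes

x = rng.normal(size=D)
u, spikes = infer(x)
print(spikes[:8])
```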