Poisson Variational Autoencoder.

Author Information

Vafaii Hadi, Galor Dekel, Yates Jacob L

Affiliations

UC Berkeley.

Publication Information

ArXiv. 2024 Dec 9:arXiv:2405.14473v2.

Abstract

Variational autoencoders (VAEs) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral [1] and dorsal [2] pathways. Despite their success, traditional VAEs rely on continuous latent variables, which deviates sharply from the discrete nature of biological neurons. Here, we developed the Poisson VAE (P-VAE), a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts. Combining Poisson-distributed latent variables with predictive coding introduces a metabolic cost term in the model loss function, suggesting a relationship with sparse coding, which we verify empirically. Additionally, we analyze the geometry of learned representations, contrasting the P-VAE to alternative VAE models. We find that the P-VAE encodes its inputs in relatively higher dimensions, facilitating linear separability of categories in a downstream classification task with much better (5×) sample efficiency. Our work provides an interpretable computational framework to study brain-like sensory processing and paves the way for a deeper understanding of perception as an inferential process.
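
To make the metabolic cost term concrete, here is a minimal PyTorch sketch of a VAE with Poisson-distributed latents. It is an illustration under stated assumptions, not the paper's implementation: the layer sizes, the prior rate, and the straight-through gradient surrogate (used here in place of the paper's reparameterization for Poisson samples) are placeholders. The analytic KL between the approximate posterior Poisson(rate) and the prior Poisson(prior_rate) contains terms that grow with the inferred firing rate, which is the metabolic cost the abstract refers to and the source of the connection to sparse coding.

```python
# Minimal sketch of a Poisson-latent VAE loss in PyTorch.
# Assumptions (not from the paper): single-layer encoder/decoder, the
# input/latent sizes, the prior rate, and a straight-through gradient
# surrogate instead of the paper's Poisson reparameterization.
import torch
import torch.nn as nn


def poisson_kl(rate: torch.Tensor, prior_rate: torch.Tensor) -> torch.Tensor:
    """Analytic KL( Poisson(rate) || Poisson(prior_rate) ), elementwise.

    KL = prior_rate - rate + rate * log(rate / prior_rate). The
    rate-dependent terms penalize high firing rates, acting as the
    metabolic cost that links the P-VAE objective to sparse coding.
    """
    return prior_rate - rate + rate * torch.log(rate / prior_rate)


class PoissonVAE(nn.Module):
    def __init__(self, n_in: int = 784, n_latent: int = 512,
                 prior_rate: float = 1.0):
        super().__init__()
        # Softplus keeps the inferred Poisson rates non-negative.
        self.encoder = nn.Sequential(nn.Linear(n_in, n_latent), nn.Softplus())
        self.decoder = nn.Linear(n_latent, n_in)
        self.register_buffer("prior_rate", torch.tensor(prior_rate))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rate = self.encoder(x) + 1e-6        # inferred firing rates (> 0)
        z = torch.poisson(rate)              # discrete spike counts
        z = z + rate - rate.detach()         # straight-through gradient surrogate
        x_hat = self.decoder(z)
        recon = ((x - x_hat) ** 2).sum(-1)   # Gaussian reconstruction term
        kl = poisson_kl(rate, self.prior_rate).sum(-1)
        return (recon + kl).mean()           # negative ELBO (up to constants)
```

For example, `PoissonVAE()(torch.rand(32, 784))` returns a scalar negative-ELBO estimate. Because the KL charges roughly rate·log(rate/prior_rate) per latent unit, minimizing it drives firing rates toward zero, analogous to the L1 sparsity penalty in sparse coding.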

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f0e7/11661288/321414b1871f/nihpp-2405.14473v2-f0001.jpg
