
Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes

Authors

Tran Ba-Hien, Shahbaba Babak, Mandt Stephan, Filippone Maurizio

Affiliations

Department of Data Science, EURECOM, France.

Departments of Statistics and Computer Science, University of California, Irvine, USA.

Publication

Proc Mach Learn Res. 2023 Jul;202:34409-34430.

Abstract

We present a fully Bayesian autoencoder model that treats both local latent variables and global decoder parameters in a Bayesian fashion. This approach allows for flexible priors and posterior approximations while keeping the inference costs low. To achieve this, we introduce an amortized MCMC approach by utilizing an implicit stochastic network to learn sampling from the posterior over local latent variables. Furthermore, we extend the model by incorporating a Sparse Gaussian Process prior over the latent space, allowing for a fully Bayesian treatment of inducing points and kernel hyperparameters and leading to improved scalability. Additionally, we enable Deep Gaussian Process priors on the latent space and the handling of missing data. We evaluate our model on a range of experiments focusing on dynamic representation learning and generative modeling, demonstrating the strong performance of our approach in comparison to existing methods that combine Gaussian Processes and autoencoders.
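To make the "sparse Gaussian process prior over the latent space" concrete, the sketch below evaluates the log-density of a sequence of latent values under a FITC-style sparse GP prior: an RBF kernel, a small set of inducing inputs, a Nyström low-rank covariance, and an exact diagonal correction. This is only a minimal illustration of the general sparse-GP construction with hypothetical inputs (time stamps `t`, inducing inputs `t_ind`), not the paper's actual model, which additionally treats inducing points and kernel hyperparameters in a Bayesian fashion.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel on 1-D inputs (e.g., frame time stamps).
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sparse_gp_logpdf(z, t, t_ind, lengthscale=1.0, variance=1.0, jitter=1e-6):
    """Log-density of one latent dimension z (length n) under a FITC-style
    sparse GP prior with M inducing inputs t_ind."""
    n, m = len(t), len(t_ind)
    Kmm = rbf(t_ind, t_ind, lengthscale, variance) + jitter * np.eye(m)
    Knm = rbf(t, t_ind, lengthscale, variance)
    Knn_diag = variance * np.ones(n)          # diag of the exact kernel matrix
    Q = Knm @ np.linalg.solve(Kmm, Knm.T)     # Nystrom low-rank approximation
    # FITC covariance: low-rank part plus the exact diagonal residual.
    cov = Q + np.diag(Knn_diag - np.diag(Q)) + jitter * np.eye(n)
    L = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(L, z)
    return (-0.5 * alpha @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)     # 50 observation times
t_ind = np.linspace(0.0, 1.0, 8)  # M = 8 inducing inputs
z = rng.normal(size=50)           # one latent dimension across the sequence
print(sparse_gp_logpdf(z, t, t_ind))
```

Because the inducing-point Gram matrix is only M x M, the dominant cost is O(nM^2) rather than the O(n^3) of an exact GP prior, which is the scalability gain the abstract refers to; with `t_ind = t` the approximation recovers the exact GP density.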

