
Deep Mixture Generative Autoencoders

Author Information

Ye Fei, Bors Adrian G

Publication Information

IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5789-5803. doi: 10.1109/TNNLS.2021.3071401. Epub 2022 Oct 5.

Abstract

Variational autoencoders (VAEs) are one of the most popular unsupervised generative models that rely on learning latent representations of data. In this article, we extend the classical concept of Gaussian mixtures into the deep variational framework by proposing a mixture of VAEs (MVAE). Each component in the MVAE model is implemented by a variational encoder and has an associated subdecoder. The separation between the latent spaces modeled by different encoders is enforced using the d-variable Hilbert-Schmidt independence criterion (dHSIC), so that each component captures different variational features of the data. We also propose a mechanism for finding the appropriate number of VAE components for a given task, leading to an optimal architecture. The differentiable categorical Gumbel-softmax distribution is used to generate dropout masking parameters within the end-to-end backpropagation training framework. Extensive experiments show that the proposed MVAE model can learn a rich latent data representation and is able to discover additional underlying data representation factors.
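As a rough illustration of the architecture the abstract describes, below is a minimal PyTorch sketch, not the authors' implementation: a mixture of VAE components in which a Gumbel-softmax gate produces a differentiable component-selection mask, and a pairwise HSIC penalty (a simplified stand-in for the d-variable dHSIC used in the paper) pushes the component latent spaces apart. All module names, dimensions, and loss weights here are illustrative assumptions.

```python
# Hypothetical sketch of an MVAE-style model; hyperparameters and the
# pairwise-HSIC simplification are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_gram(z, sigma=1.0):
    # RBF-kernel Gram matrix over a batch of latent codes.
    d2 = torch.cdist(z, z).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(zx, zy):
    # Biased empirical HSIC between two batches of latent codes;
    # a pairwise stand-in for the d-variable dHSIC.
    n = zx.size(0)
    K, L = gaussian_gram(zx), gaussian_gram(zy)
    H = torch.eye(n, device=zx.device) - torch.full((n, n), 1.0 / n, device=zx.device)
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

class VAEComponent(nn.Module):
    # One mixture component: a variational encoder plus its subdecoder.
    def __init__(self, x_dim, z_dim, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), z, mu, logvar

class MVAE(nn.Module):
    def __init__(self, x_dim, z_dim=16, n_components=4):
        super().__init__()
        self.components = nn.ModuleList(
            VAEComponent(x_dim, z_dim) for _ in range(n_components))
        self.gate = nn.Linear(x_dim, n_components)  # logits for component selection

    def forward(self, x, tau=1.0):
        # Differentiable (near-)one-hot mask over components via Gumbel-softmax.
        mask = F.gumbel_softmax(self.gate(x), tau=tau, hard=True)  # (B, K)
        outs = [c(x) for c in self.components]
        recon = sum(m.unsqueeze(1) * o[0] for m, o in zip(mask.t(), outs))
        zs = [o[1] for o in outs]

        # Standard VAE losses, averaged over components for simplicity.
        rec_loss = F.mse_loss(recon, x)
        kl = sum(-0.5 * torch.mean(1 + o[3] - o[2].pow(2) - o[3].exp())
                 for o in outs) / len(outs)
        # Pairwise independence penalty between component latent spaces.
        ind = sum(hsic(zs[i], zs[j])
                  for i in range(len(zs)) for j in range(i + 1, len(zs)))
        return recon, rec_loss + kl + 0.1 * ind
```

For instance, a batch of flattened 28x28 images would be passed as x with x_dim=784; annealing tau toward zero during training sharpens the gate toward a hard per-sample component choice, which loosely mirrors the dropout-masking mechanism the abstract describes for selecting an appropriate number of components.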

