
Denoising Adversarial Autoencoders

Author information

Creswell Antonia, Bharath Anil Anthony

Publication information

IEEE Trans Neural Netw Learn Syst. 2019 Apr;30(4):968-984. doi: 10.1109/TNNLS.2018.2852738. Epub 2018 Aug 16.

Abstract

Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.
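The abstract describes the training recipe at a high level: encode a corrupted input, reconstruct the clean input, and at the same time regularize the encoder's latent codes toward a chosen prior with an adversarially trained discriminator. The sketch below illustrates that loop only; it is not the authors' implementation. The network sizes, the Gaussian corruption with clamping, the N(0, I) prior, and the use of PyTorch are assumptions made for illustration.

# Minimal sketch of a denoising adversarial autoencoder training step.
# Illustrative only; architecture, corruption process, and prior are assumed.
import torch
import torch.nn as nn

latent_dim = 8

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
# The discriminator judges whether a latent code was drawn from the prior or produced by the encoder.
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(encoder.parameters(), lr=1e-3)
bce = nn.BCELoss()
mse = nn.MSELoss()

def train_step(x):
    batch = x.size(0)

    # 1) Denoising reconstruction: encode a corrupted input, reconstruct the clean one.
    x_noisy = (x + 0.3 * torch.randn_like(x)).clamp(0.0, 1.0)  # assumed Gaussian corruption
    recon_loss = mse(decoder(encoder(x_noisy)), x)
    opt_ae.zero_grad()
    recon_loss.backward()
    opt_ae.step()

    # 2) Adversarial regularization: train the discriminator to separate prior samples
    #    from encoded (corrupted) inputs.
    z_fake = encoder(x_noisy).detach()
    z_prior = torch.randn(batch, latent_dim)  # assumed N(0, I) prior
    d_loss = bce(discriminator(z_prior), torch.ones(batch, 1)) + \
             bce(discriminator(z_fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 3) Generator step: update the encoder so its codes fool the discriminator,
    #    shaping the aggregated posterior toward the prior.
    g_loss = bce(discriminator(encoder(x_noisy)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    return recon_loss.item(), d_loss.item(), g_loss.item()

# Example: one step on a random batch of flattened 28x28 images.
losses = train_step(torch.rand(32, 784))

The key design point the abstract emphasizes is that the reconstruction target is the clean input while the encoder only ever sees corrupted inputs; the adversarial loss then plays the role that a KL term plays in a variational autoencoder, matching the encoded distribution to the prior so that samples drawn from the prior decode to plausible data.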

