

Collective dynamics of repeated inference in variational autoencoder rapidly find cluster structure.

Affiliations

Department of Complexity Science and Engineering, The University of Tokyo, Chiba, 277-8561, Japan.

Research Fellow of the Japan Society for the Promotion of Science, Tokyo, 102-0083, Japan.

Publication

Sci Rep. 2020 Sep 29;10(1):16001. doi: 10.1038/s41598-020-72593-4.

DOI: 10.1038/s41598-020-72593-4
PMID: 32994479
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7524732/
Abstract

Deep neural networks are good at extracting low-dimensional subspaces (latent spaces) that represent the essential features inside a high-dimensional dataset. Deep generative models represented by variational autoencoders (VAEs) can generate and infer high-quality datasets, such as images. In particular, VAEs can eliminate the noise contained in an image by repeating the mapping between latent and data space. To clarify the mechanism of such denoising, we numerically analyzed how the activity pattern of trained networks changes in the latent space during inference. We considered the time development of the activity pattern for specific data as one trajectory in the latent space and investigated the collective behavior of these inference trajectories for many data. Our study revealed that when a cluster structure exists in the dataset, the trajectory rapidly approaches the center of the cluster. This behavior was qualitatively consistent with the concept retrieval reported in associative memory models. Additionally, the larger the noise contained in the data, the closer the trajectory was to a more global cluster. It was demonstrated that by increasing the number of latent variables, the trend of approaching a cluster center can be enhanced, and the generalization ability of the VAE can be improved.
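The repeated inference the abstract describes — alternately mapping a noisy sample between data space and latent space until it settles near a cluster center — can be sketched numerically. The toy below is an illustrative assumption, not the authors' model: it uses a linear autoencoder (PCA gives its optimal encoder/decoder) as a stand-in for a trained VAE, and a synthetic two-cluster dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: two well-separated clusters in 10-D space.
dim = 10
centers = np.stack([np.ones(dim), -np.ones(dim)])
data = np.concatenate([
    centers[0] + 0.3 * rng.standard_normal((200, dim)),
    centers[1] + 0.3 * rng.standard_normal((200, dim)),
])

# "Train" a linear autoencoder: the top principal directions of the
# data are its optimal encoder/decoder weights.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
W = vt[:2]  # two latent variables

def encode(x):
    return W @ (x - mean)

def decode(z):
    return W.T @ z + mean

# Repeated inference: start from a heavily noised cluster-0 sample and
# alternate between latent space and data space, recording the latent
# trajectory.
x = centers[0] + 1.5 * rng.standard_normal(dim)
dist_before = np.linalg.norm(x - centers[0])

trajectory = []
for _ in range(20):
    z = encode(x)
    trajectory.append(z)
    x = decode(z)

dist_after = np.linalg.norm(x - centers[0])
print(f"distance to cluster center: {dist_before:.2f} -> {dist_after:.2f}")
```

Because this encoder/decoder pair is linear, the map is idempotent and the trajectory collapses to the subspace in a single step; the paper's nonlinear VAE instead contracts toward the cluster center gradually over many iterations, which is what makes the trajectory dynamics worth studying.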


Figures (Fig. 1-10):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/081248511958/41598_2020_72593_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/9c014bf935ba/41598_2020_72593_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/d0e58bc04058/41598_2020_72593_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/6a79cfe41394/41598_2020_72593_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/02f802f0c074/41598_2020_72593_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/4e00dd109c9a/41598_2020_72593_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/0870d2afa01d/41598_2020_72593_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/2fd35c7d22fa/41598_2020_72593_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/f2802e421ecc/41598_2020_72593_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0bcf/7524732/f26d5a3e96e1/41598_2020_72593_Fig10_HTML.jpg

Similar articles

1. Collective dynamics of repeated inference in variational autoencoder rapidly find cluster structure.
Sci Rep. 2020 Sep 29;10(1):16001. doi: 10.1038/s41598-020-72593-4.
2. An Overview of Variational Autoencoders for Source Separation, Finance, and Bio-Signal Applications.
Entropy (Basel). 2021 Dec 28;24(1):55. doi: 10.3390/e24010055.
3. A multimodal dynamical variational autoencoder for audiovisual speech representation learning.
Neural Netw. 2024 Apr;172:106120. doi: 10.1016/j.neunet.2024.106120. Epub 2024 Jan 11.
4. Probabilistic Autoencoder Using Fisher Information.
Entropy (Basel). 2021 Dec 6;23(12):1640. doi: 10.3390/e23121640.
5. Organization of a Latent Space structure in VAE/GAN trained by navigation data.
Neural Netw. 2022 Aug;152:234-243. doi: 10.1016/j.neunet.2022.04.012. Epub 2022 Apr 20.
6. Generative Embeddings of Brain Collective Dynamics Using Variational Autoencoders.
Phys Rev Lett. 2020 Dec 4;125(23):238101. doi: 10.1103/PhysRevLett.125.238101.
7. Predicting drug polypharmacology from cell morphology readouts using variational autoencoder latent space arithmetic.
PLoS Comput Biol. 2022 Feb 25;18(2):e1009888. doi: 10.1371/journal.pcbi.1009888. eCollection 2022 Feb.
8. Variational image registration with learned prior using multi-stage VAEs.
Comput Biol Med. 2024 Aug;178:108785. doi: 10.1016/j.compbiomed.2024.108785. Epub 2024 Jun 25.
9. Predictive variational autoencoder for learning robust representations of time-series data.
ArXiv. 2023 Dec 12:arXiv:2312.06932v1.
10. Explore Protein Conformational Space With Variational Autoencoder.
Front Mol Biosci. 2021 Nov 12;8:781635. doi: 10.3389/fmolb.2021.781635. eCollection 2021.

Cited by

1. Clustering Molecules at a Large Scale: Integrating Spectral Geometry with Deep Learning.
Molecules. 2024 Aug 17;29(16):3902. doi: 10.3390/molecules29163902.