Suppr 超能文献



ConcVAE: Conceptual Representation Learning.

Authors

Togo Ren, Nakagawa Nao, Ogawa Takahiro, Haseyama Miki

Publication

IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7529-7541. doi: 10.1109/TNNLS.2024.3404496. Epub 2025 Apr 4.

DOI: 10.1109/TNNLS.2024.3404496
PMID: 38959142
Abstract

Disentangled representation learning aims at obtaining an independent latent representation without supervisory signals. However, the independence of a representation does not guarantee interpretability to match human intuition in the unsupervised settings. In this article, we introduce conceptual representation learning, an unsupervised strategy to learn a representation and its concepts. An antonym pair forms a concept, which determines the semantically meaningful axes in the latent space. Since the connection between signifying words and signified notions is arbitrary in natural languages, the verbalization of data features makes the representation make sense to humans. We thus construct Conceptual VAE (ConcVAE), a variational autoencoder (VAE)-based generative model with an explicit process in which the semantic representation of data is generated via trainable concepts. In visual data, ConcVAE utilizes natural language arbitrariness as an inductive bias of unsupervised learning by using a vision-language pretraining, which can tell an unsupervised model what makes sense to humans. Qualitative and quantitative evaluations show that the conceptual inductive bias in ConcVAE effectively disentangles the latent representation in a sense-making manner without supervision. Code is available at https://github.com/ganmodokix/concvae.
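The abstract's central mechanism, a concept defined by an antonym pair that fixes a semantically meaningful axis in the latent space, can be sketched as follows. This is not the authors' implementation (see their repository at https://github.com/ganmodokix/concvae for that); it is a minimal illustration in which random unit vectors stand in for the text embeddings a vision-language pretrained model would produce for each antonym.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(word: str, dim: int = 8) -> np.ndarray:
    # Stand-in for a vision-language text embedding of one antonym.
    # In ConcVAE these would come from a pretrained vision-language
    # model; here they are random unit vectors for illustration only.
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def concept_axis(pos_emb: np.ndarray, neg_emb: np.ndarray) -> np.ndarray:
    # A concept is an antonym pair; its latent axis is the normalized
    # difference between the two pole embeddings.
    diff = pos_emb - neg_emb
    return diff / np.linalg.norm(diff)

def concept_score(x_emb: np.ndarray, axis: np.ndarray) -> float:
    # Projecting a data embedding onto the concept axis gives the
    # signed coordinate along that semantically meaningful dimension.
    return float(x_emb @ axis)

bright, dark = embed("bright"), embed("dark")
axis = concept_axis(bright, dark)

# The two poles necessarily land at opposite ends of their own axis,
# since (b - d) @ (b - d) > 0 whenever the embeddings differ.
assert concept_score(bright, axis) > concept_score(dark, axis)
```

In the full model, such axes are trainable and the VAE's latent code is generated through them, so each latent dimension inherits a verbalized meaning from its antonym pair rather than being an arbitrary independent direction.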


Similar Articles

1
ConcVAE: Conceptual Representation Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):7529-7541. doi: 10.1109/TNNLS.2024.3404496. Epub 2025 Apr 4.
2
A multimodal dynamical variational autoencoder for audiovisual speech representation learning.
Neural Netw. 2024 Apr;172:106120. doi: 10.1016/j.neunet.2024.106120. Epub 2024 Jan 11.
3
Representation learning of resting state fMRI with variational autoencoder.
Neuroimage. 2021 Nov 1;241:118423. doi: 10.1016/j.neuroimage.2021.118423. Epub 2021 Jul 23.
4
A variational autoencoder trained with priors from canonical pathways increases the interpretability of transcriptome data.
PLoS Comput Biol. 2024 Jul 3;20(7):e1011198. doi: 10.1371/journal.pcbi.1011198. eCollection 2024 Jul.
5
Disentangled deep generative models reveal coding principles of the human face processing network.
PLoS Comput Biol. 2024 Feb 26;20(2):e1011887. doi: 10.1371/journal.pcbi.1011887. eCollection 2024 Feb.
6
Small molecule generation via disentangled representation learning.
Bioinformatics. 2022 Jun 13;38(12):3200-3208. doi: 10.1093/bioinformatics/btac296.
7
A Discriminative Cross-Aligned Variational Autoencoder for Zero-Shot Learning.
IEEE Trans Cybern. 2023 Jun;53(6):3794-3805. doi: 10.1109/TCYB.2022.3164142. Epub 2023 May 17.
8
Deep Clustering Analysis via Dual Variational Autoencoder With Spherical Latent Embeddings.
IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6303-6312. doi: 10.1109/TNNLS.2021.3135460. Epub 2023 Sep 1.
9
Development of a β-Variational Autoencoder for Disentangled Latent Space Representation of Anterior Segment Optical Coherence Tomography Images.
Transl Vis Sci Technol. 2022 Feb 1;11(2):11. doi: 10.1167/tvst.11.2.11.
10
Disentangled Representation Learning and Generation With Manifold Optimization.
Neural Comput. 2022 Sep 12;34(10):2009-2036. doi: 10.1162/neco_a_01528.