
TURBO: The Swiss Knife of Auto-Encoders.

Authors

Guillaume Quétant, Yury Belousov, Vitaliy Kinakh, Slava Voloshynovskiy

Affiliation

Centre Universitaire d'Informatique, Université de Genève, Route de Drize 7, CH-1227 Carouge, Switzerland.

Publication

Entropy (Basel). 2023 Oct 21;25(10):1471. doi: 10.3390/e25101471.

DOI:10.3390/e25101471
PMID:37895592
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10606332/
Abstract

We present a novel information-theoretic framework, termed as TURBO, designed to systematically analyse and generalise auto-encoding methods. We start by examining the principles of information bottleneck and bottleneck-based networks in the auto-encoding setting and identifying their inherent limitations, which become more prominent for data with multiple relevant, physics-related representations. The TURBO framework is then introduced, providing a comprehensive derivation of its core concept consisting of the maximisation of mutual information between various data representations expressed in two directions reflecting the information flows. We illustrate that numerous prevalent neural network models are encompassed within this framework. The paper underscores the insufficiency of the information bottleneck concept in elucidating all such models, thereby establishing TURBO as a preferable theoretical reference. The introduction of TURBO contributes to a richer understanding of data representation and the structure of neural network models, enabling more efficient and versatile applications.
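For readers who want the contrast made concrete, the standard information-bottleneck Lagrangian that the abstract argues against can be written out explicitly. The second expression below is only a schematic reading of the abstract's "maximisation of mutual information between various data representations expressed in two directions", using hypothetical notation ($\tilde{Z}$, $\tilde{X}$, weights $\lambda_1$, $\lambda_2$) rather than the paper's own symbols.

% Classical information bottleneck (standard formulation): compress X into a
% representation Z while retaining information about a relevance variable Y.
\[
  \min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;Z) \;-\; \beta\, I(Z;Y)
\]

% Schematic TURBO-style objective as described in the abstract (hypothetical
% notation): mutual information is maximised between data representations along
% both information-flow directions, encoding and decoding, instead of being
% squeezed through a single bottleneck.
\[
  \max \; \lambda_{1}\, I(X;\tilde{Z}) \;+\; \lambda_{2}\, I(Z;\tilde{X})
\]

Read this way, IB trades compression of X against relevance to Y through a single trade-off parameter $\beta$, whereas a two-directional objective keeps both flows informative, which is the property the abstract highlights for data with multiple relevant, physics-related representations.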

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/afe1f836cbbd/entropy-25-01471-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/0c26ec7d34d9/entropy-25-01471-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/adea8fa78dc6/entropy-25-01471-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/dce00847b207/entropy-25-01471-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/3b400971eafa/entropy-25-01471-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/60a9552d2714/entropy-25-01471-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/cef136c3d51a/entropy-25-01471-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/c44ede12b892/entropy-25-01471-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/b56f54058da5/entropy-25-01471-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/f551b950b1fe/entropy-25-01471-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/43a1a298ca46/entropy-25-01471-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/55bf62de82a6/entropy-25-01471-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/d6d47dee4033/entropy-25-01471-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/7d6d19c5f97e/entropy-25-01471-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/cfb9454004c6/entropy-25-01471-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/e2eb7382542e/entropy-25-01471-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ceb7/10606332/f29cadc26ec0/entropy-25-01471-g017.jpg

Similar Articles

1. TURBO: The Swiss Knife of Auto-Encoders.
   Entropy (Basel). 2023 Oct 21;25(10):1471. doi: 10.3390/e25101471.
2. Radon-Sobolev Variational Auto-Encoders.
   Neural Netw. 2021 Sep;141:294-305. doi: 10.1016/j.neunet.2021.04.018. Epub 2021 Apr 22.
3. Unsupervised Phonocardiogram Analysis With Distribution Density Based Variational Auto-Encoders.
   Front Med (Lausanne). 2021 Aug 5;8:655084. doi: 10.3389/fmed.2021.655084. eCollection 2021.
4. On the Difference between the Information Bottleneck and the Deep Information Bottleneck.
   Entropy (Basel). 2020 Jan 22;22(2):131. doi: 10.3390/e22020131.
5. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.
   IEEE Trans Cybern. 2017 Apr;47(4):1017-1027. doi: 10.1109/TCYB.2016.2536638. Epub 2016 Mar 16.
6. Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams).
   IEEE Trans Vis Comput Graph. 2024 Sep;30(9):6390-6406. doi: 10.1109/TVCG.2023.3334755. Epub 2024 Jul 31.
7. Variational graph auto-encoders for miRNA-disease association prediction.
   Methods. 2021 Aug;192:25-34. doi: 10.1016/j.ymeth.2020.08.004. Epub 2020 Aug 13.
8. Multi-channel auto-encoders for learning domain invariant representations enabling superior classification of histopathology images.
   Med Image Anal. 2023 Jan;83:102640. doi: 10.1016/j.media.2022.102640. Epub 2022 Sep 27.
9. Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding.
   Entropy (Basel). 2019 Mar 4;21(3):243. doi: 10.3390/e21030243.
10. A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification.
   Sensors (Basel). 2020 Nov 9;20(21):6378. doi: 10.3390/s20216378.

Cited By

1. Hubble Meets Webb: Image-to-Image Translation in Astronomy.
   Sensors (Basel). 2024 Feb 9;24(4):1151. doi: 10.3390/s24041151.

References

1. To Compress or Not to Compress-Self-Supervised Learning and Information Theory: A Review.
   Entropy (Basel). 2024 Mar 12;26(3):252. doi: 10.3390/e26030252.
2. Learning to simulate high energy particle collisions from unlabeled data.
   Sci Rep. 2022 May 9;12(1):7567. doi: 10.1038/s41598-022-10966-7.
3. Variational Information Bottleneck for Semi-Supervised Classification.
   Entropy (Basel). 2020 Aug 27;22(9):943. doi: 10.3390/e22090943.
4. Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding.
   Entropy (Basel). 2020 Feb 13;22(2):213. doi: 10.3390/e22020213.
5. Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle.
   IEEE Trans Pattern Anal Mach Intell. 2020 Sep;42(9):2225-2239. doi: 10.1109/TPAMI.2019.2909031. Epub 2019 Apr 2.
6. Information Dropout: Learning Optimal Representations Through Noisy Computation.
   IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2897-2905. doi: 10.1109/TPAMI.2017.2784440. Epub 2018 Jan 10.