Pareto-Optimal Data Compression for Binary Classification Tasks.

Authors

Tegmark Max, Wu Tailin

Affiliation

Department of Physics, MIT Kavli Institute & Center for Brains, Minds & Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.

Publication

Entropy (Basel). 2019 Dec 19;22(1):7. doi: 10.3390/e22010007.

DOI: 10.3390/e22010007
PMID: 33285782
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7516503/
Abstract

The goal of lossy data compression is to reduce the storage cost of a data set while retaining as much information as possible about something (Y) that you care about. For example, what aspects of an image contain the most information about whether it depicts a cat? Mathematically, this corresponds to finding a mapping X → Z ≡ f(X) that maximizes the mutual information I(Z, Y) while the entropy H(Z) is kept below some fixed threshold. We present a new method for mapping out the Pareto frontier for classification tasks, reflecting the tradeoff between retained entropy and class information. We first show how a random variable X (an image, say) drawn from a class Y ∈ {1, …, n} can be distilled into a vector W = f(X) ∈ R^(n−1) losslessly, so that I(W, Y) = I(X, Y); for example, for a binary classification task of cats and dogs, each image is mapped into a single real number retaining all information that helps distinguish cats from dogs. For the n = 2 case of binary classification, we then show how W can be further compressed into a discrete variable Z = g_β(W) ∈ {1, …, m_β} by binning W into m_β bins, in such a way that varying the parameter β sweeps out the full Pareto frontier, solving a generalization of the discrete information bottleneck (DIB) problem. We argue that the most interesting points on this frontier are "corners" maximizing I(Z, Y) for a fixed number of bins m = 2, 3, …, which can conveniently be found without multiobjective optimization. We apply this method to the CIFAR-10, MNIST and Fashion-MNIST datasets, illustrating how it can be interpreted as an information-theoretically optimal image clustering algorithm. We find that these Pareto frontiers are not concave, and that recently reported DIB phase transitions correspond to transitions between these corners, changing the number of clusters.
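To make the abstract's compression step concrete, here is a minimal Python sketch (not the authors' implementation) of the binning stage: a one-dimensional variable W stands in for the paper's lossless distillation f(X), it is compressed into m bins to give Z, and H(Z) and I(Z, Y) are estimated from samples. The equal-mass (quantile) bin edges and the Gaussian toy data are assumptions for illustration; the paper instead optimizes the bin boundaries to reach the "corner" maximizing I(Z, Y) for each m.

```python
import numpy as np

def entropy_bits(z):
    """Estimate H(Z) in bits from discrete samples."""
    _, counts = np.unique(z, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information_bits(z, y):
    """Estimate I(Z, Y) in bits from paired discrete samples."""
    _, zi = np.unique(z, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((zi.max() + 1, yi.max() + 1))
    np.add.at(joint, (zi, yi), 1)          # joint histogram of (Z, Y)
    joint /= joint.sum()
    indep = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / indep[nz])).sum())

# Toy binary task: W is a noisy 1-D score, Y the class label.
# W here is a hypothetical stand-in for the paper's distillation f(X).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
w = y + rng.normal(0.0, 0.8, size=y.shape)

# For each bin count m, compress W into Z and measure the
# (H(Z), I(Z, Y)) tradeoff point. Quantile edges are a simple
# heuristic, not the paper's optimized bin boundaries.
for m in (2, 3, 4, 5):
    edges = np.quantile(w, np.linspace(0, 1, m + 1)[1:-1])
    z = np.digitize(w, edges)              # Z = g(W) in {0, ..., m-1}
    print(f"m={m}: H(Z)={entropy_bits(z):.3f} bits, "
          f"I(Z,Y)={mutual_information_bits(z, y):.3f} bits")
```

Sweeping m this way visits candidate frontier points directly, echoing the abstract's claim that the interesting points are corners maximizing I(Z, Y) at a fixed number of bins, which can be found without multiobjective optimization over β.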

Figures 1–9 (article images, deduplicated):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/7e36b5da9f6a/entropy-22-00007-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/60909567055c/entropy-22-00007-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/018ca1d93894/entropy-22-00007-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/b1c1e67940fe/entropy-22-00007-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/619643e55d8e/entropy-22-00007-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/1874c49df1dc/entropy-22-00007-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/1846f3487fbf/entropy-22-00007-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/312039dc8924/entropy-22-00007-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aa0/7516503/b0f495161762/entropy-22-00007-g009.jpg

Similar Articles

1. Pareto-Optimal Data Compression for Binary Classification Tasks. Entropy (Basel). 2019 Dec 19;22(1):7. doi: 10.3390/e22010007.
2. Pareto-Optimal Clustering with the Primal Deterministic Information Bottleneck. Entropy (Basel). 2022 May 30;24(6):771. doi: 10.3390/e24060771.
3. The Deterministic Information Bottleneck. Neural Comput. 2017 Jun;29(6):1611-1630. doi: 10.1162/NECO_a_00961. Epub 2017 Apr 14.
4. The Convex Information Bottleneck Lagrangian. Entropy (Basel). 2020 Jan 14;22(1):98. doi: 10.3390/e22010098.
5. Identifying (Quasi) Equally Informative Subsets in Feature Selection Problems for Classification: A Max-Relevance Min-Redundancy Approach. IEEE Trans Cybern. 2016 Jun;46(6):1424-37. doi: 10.1109/TCYB.2015.2444435. Epub 2015 Jul 6.
6. The Double-Sided Information Bottleneck Function. Entropy (Basel). 2022 Sep 19;24(9):1321. doi: 10.3390/e24091321.
7. A Collaborative Neurodynamic Approach to Multiobjective Optimization. IEEE Trans Neural Netw Learn Syst. 2018 Nov;29(11):5738-5748. doi: 10.1109/TNNLS.2018.2806481. Epub 2018 Mar 29.
8. Development of the scintillator-based probe for fast-ion losses in the HL-2A tokamak. Rev Sci Instrum. 2014 May;85(5):053502. doi: 10.1063/1.4872385.
9. Multiobjective genetic optimization of diagnostic classifiers with implications for generating receiver operating characteristic curves. IEEE Trans Med Imaging. 1999 Aug;18(8):675-85. doi: 10.1109/42.796281.
10. Information Bottleneck Analysis by a Conditional Mutual Information Bound. Entropy (Basel). 2021 Jul 29;23(8):974. doi: 10.3390/e23080974.

Cited By

1. Pareto-Optimal Clustering with the Primal Deterministic Information Bottleneck. Entropy (Basel). 2022 May 30;24(6):771. doi: 10.3390/e24060771.
2. The Future of Computational Linguistics: On Beyond Alchemy. Front Artif Intell. 2021 Apr 19;4:625341. doi: 10.3389/frai.2021.625341. eCollection 2021.
3. Information Bottleneck: Theory and Applications in Deep Learning. Entropy (Basel). 2020 Dec 14;22(12):1408. doi: 10.3390/e22121408.

References

1. Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle. IEEE Trans Pattern Anal Mach Intell. 2020 Sep;42(9):2225-2239. doi: 10.1109/TPAMI.2019.2909031. Epub 2019 Apr 2.
2. The Information Bottleneck and Geometric Clustering. Neural Comput. 2019 Mar;31(3):596-612. doi: 10.1162/neco_a_01136. Epub 2018 Oct 12.
3. The Deterministic Information Bottleneck. Neural Comput. 2017 Jun;29(6):1611-1630. doi: 10.1162/NECO_a_00961. Epub 2017 Apr 14.
4. Storage Space Allocation Strategy for Digital Data with Message Importance. Entropy (Basel). 2020 May 25;22(5):591. doi: 10.3390/e22050591.