Learning Invariant Representations using Inverse Contrastive Loss.

Authors

Aditya Kumar Akash, Vishnu Suresh Lokhande, Sathya N. Ravi, Vikas Singh

Affiliations

University of Wisconsin-Madison.

University of Illinois at Chicago.

Publication

Proc AAAI Conf Artif Intell. 2021 Feb;35(8):6582-6591. Epub 2021 May 18.

PMID: 34405058
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8366266/
Abstract

Learning invariant representations is a critical first step in a number of machine learning tasks. A common approach corresponds to the so-called information bottleneck principle in which an application dependent function of mutual information is carefully chosen and optimized. Unfortunately, in practice, these functions are not suitable for optimization purposes since these losses are agnostic of the metric structure of the parameters of the model. We introduce a class of losses for learning representations that are invariant to some extraneous variable of interest by inverting the class of contrastive losses, i.e., inverse contrastive loss (ICL). We show that if the extraneous variable is binary, then optimizing ICL is equivalent to optimizing a regularized MMD divergence. More generally, we also show that if we are provided a metric on the sample space, our formulation of ICL can be decomposed into a sum of convex functions of the given distance metric. Our experimental results indicate that models obtained by optimizing ICL achieve significantly better invariance to the extraneous variable for a fixed desired level of accuracy. In a variety of experimental settings, we show applicability of ICL for learning invariant representations for both continuous and discrete extraneous variables. The project page with code is available at https://github.com/adityakumarakash/ICL.
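Since the abstract states that, for a binary extraneous variable, optimizing ICL is equivalent to optimizing a regularized MMD divergence, that special case can be sketched compactly. The PyTorch snippet below is a minimal illustration of the MMD view only, not the authors' ICL implementation (see the linked repository for that); `encoder`, `head`, `task_loss`, and the weight `lam` in the usage comment are hypothetical names.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between two samples
    x and y under an RBF kernel with bandwidth sigma."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def invariance_penalty(z, c, sigma=1.0):
    """MMD^2 between the two groups of representations z induced by a
    binary extraneous variable c (an integer tensor of 0s and 1s)."""
    return rbf_mmd2(z[c == 0], z[c == 1], sigma=sigma)

# Hypothetical training step: fit the task while pushing the two
# groups' representation distributions together.
#   z = encoder(x)
#   loss = task_loss(head(z), y) + lam * invariance_penalty(z, c)
```

Minimizing the penalty drives the representation distributions for c = 0 and c = 1 toward each other, which is the binary-variable invariance the abstract describes; the trade-off against task accuracy is controlled by the regularization weight.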

Similar Articles

1
Learning Invariant Representations using Inverse Contrastive Loss.
Proc AAAI Conf Artif Intell. 2021 Feb;35(8):6582-6591. Epub 2021 May 18.
2
Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11157-11167. doi: 10.1109/TNNLS.2023.3248871. Epub 2024 Aug 5.
3
Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
4
STACoRe: Spatio-temporal and action-based contrastive representations for reinforcement learning in Atari.
Neural Netw. 2023 Mar;160:1-11. doi: 10.1016/j.neunet.2022.12.018. Epub 2022 Dec 29.
5
Molecular property prediction by semantic-invariant contrastive learning.
Bioinformatics. 2023 Aug 1;39(8). doi: 10.1093/bioinformatics/btad462.
6
Hierarchically Contrastive Hard Sample Mining for Graph Self-Supervised Pretraining.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16748-16761. doi: 10.1109/TNNLS.2023.3297607. Epub 2024 Oct 29.
7
Contrastive learning of heart and lung sounds for label-efficient diagnosis.
Patterns (N Y). 2021 Dec 7;3(1):100400. doi: 10.1016/j.patter.2021.100400. eCollection 2022 Jan 14.
8
Mutual Information Driven Equivariant Contrastive Learning for 3D Action Representation Learning.
IEEE Trans Image Process. 2024;33:1883-1897. doi: 10.1109/TIP.2024.3372451. Epub 2024 Mar 12.
9
Towards generalizable Graph Contrastive Learning: An information theory perspective.
Neural Netw. 2024 Apr;172:106125. doi: 10.1016/j.neunet.2024.106125. Epub 2024 Jan 17.
10
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):10212-10227. doi: 10.1109/TPAMI.2023.3257878. Epub 2023 Jun 30.

Cited By

1
On the Versatile Uses of Partial Distance Correlation in Deep Learning.
Comput Vis ECCV. 2022 Oct;13686:327-346. doi: 10.1007/978-3-031-19809-0_19. Epub 2022 Nov 1.
2
Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2022 Jun;2022:10422-10431. doi: 10.1109/cvpr52688.2022.01018. Epub 2022 Sep 27.

References

1
FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret.
Comput Vis ECCV. 2020 Aug;12357:365-381. doi: 10.1007/978-3-030-58610-2_22. Epub 2020 Oct 7.
2
Scanner invariant representations for diffusion MRI harmonization.
Magn Reson Med. 2020 Oct;84(4):2174-2189. doi: 10.1002/mrm.28243. Epub 2020 Apr 6.
3
Statistical tests and identifiability conditions for pooling and analyzing multisite datasets.
Proc Natl Acad Sci U S A. 2018 Feb 13;115(7):1481-1486. doi: 10.1073/pnas.1719747115. Epub 2018 Jan 31.
4
Dependence of brain DTI maps of fractional anisotropy and mean diffusivity on the number of diffusion weighting directions.
J Appl Clin Med Phys. 2009 Dec 23;11(1):2927. doi: 10.1120/jacmp.v11i1.2927.