Robinson Joshua, Sun Li, Yu Ke, Batmanghelich Kayhan, Jegelka Stefanie, Sra Suvrit
Massachusetts Institute of Technology.
University of Pittsburgh.
Adv Neural Inf Process Syst. 2021 Dec;34:4974-4986.
The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted. However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact the performance on downstream tasks via "shortcuts", i.e., by inadvertently suppressing important predictive features. We find that feature extraction is influenced by the difficulty of the so-called instance discrimination task (i.e., the task of discriminating pairs of similar points from pairs of dissimilar ones). Although harder pairs improve the representation of some features, the improvement comes at the cost of suppressing previously well represented features. In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features. Empirically, we observe that applying IFM reduces feature suppression, and as a result improves performance on vision and medical imaging tasks. The code is available at: https://github.com/joshr17/IFM.
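The abstract describes IFM only at a high level. As a concrete illustration, the following is a minimal PyTorch-style sketch of an InfoNCE contrastive loss in which the instance discrimination task is made harder by perturbing the positive and negative similarities with a budget epsilon before recomputing the loss. The function name, the placement of epsilon relative to the temperature, and the equal weighting of the standard and perturbed losses are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

import torch
import torch.nn.functional as F

def ifm_contrastive_loss(anchor, positive, negatives, temperature=0.5, epsilon=0.1):
    """Sketch of an IFM-style contrastive loss.

    anchor, positive: (B, D); negatives: (B, K, D); all assumed L2-normalized.
    """
    # Similarity logits for the instance discrimination task.
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature   # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # (B, K)

    def info_nce(pos, neg):
        # Cross-entropy with the positive pair at index 0.
        logits = torch.cat([pos, neg], dim=1)                                 # (B, 1+K)
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)

    # Standard loss plus a perturbed loss in which the discrimination task is
    # made harder: positive similarities lowered, negatives raised by epsilon.
    loss_standard = info_nce(pos_logit, neg_logits)
    loss_perturbed = info_nce(pos_logit - epsilon / temperature,
                              neg_logits + epsilon / temperature)
    return 0.5 * (loss_standard + loss_perturbed)

The averaged objective keeps the original instance discrimination signal while also optimizing against the harder, perturbed version, which is the mechanism the abstract credits with reducing feature suppression.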