IEEE Trans Image Process. 2023;32:1245-1256. doi: 10.1109/TIP.2023.3242148.
Deep Metric Learning (DML) plays a critical role in various machine learning tasks. However, most existing deep metric learning methods based on binary similarity are sensitive to noisy labels, which are widely present in real-world data. Since these noisy labels often cause severe performance degradation, it is crucial to enhance the robustness and generalization ability of DML. In this paper, we propose an Adaptive Hierarchical Similarity Metric Learning method. It considers two types of noise-insensitive information: class-wise divergence and sample-wise consistency. Specifically, class-wise divergence effectively mines richer, beyond-binary similarity information by taking advantage of hyperbolic metric learning, while sample-wise consistency further improves the generalization ability of the model through contrastive augmentation. More importantly, we design an adaptive strategy to integrate both types of information in a unified view. Notably, the new method can be extended to any pair-based metric loss. Extensive experimental results on benchmark datasets demonstrate that our method achieves state-of-the-art performance compared with current deep metric learning approaches.
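The abstract attributes the beyond-binary similarity modeling to hyperbolic metric learning. As an illustrative sketch only (not the paper's actual loss), the Poincaré-ball distance below shows the basic mechanism such methods rely on: distances grow rapidly toward the boundary of the ball, so points at equal Euclidean separation can sit at very different hyperbolic distances, which is what allows hierarchical (tree-like) similarity structure to be embedded:

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points on the Poincare ball (||v|| < 1)."""
    sq_diff = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / (denom + eps))

# Three collinear points with equal Euclidean spacing (0.4 apart).
a = np.array([0.1, 0.0])
b = np.array([0.5, 0.0])
c = np.array([0.9, 0.0])

# The pair closer to the boundary is much farther apart hyperbolically,
# so depth in the ball can encode levels of a similarity hierarchy.
print(poincare_distance(a, b))
print(poincare_distance(b, c))
```

In hyperbolic metric-learning approaches, embeddings are constrained inside the unit ball and this distance replaces the Euclidean one inside a pair-based loss; the choice of curvature and the projection onto the ball are additional design decisions not shown here.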