Rethinking Maximum Mean Discrepancy for Visual Domain Adaptation.

Author information

Wang Wei, Li Haojie, Ding Zhengming, Nie Feiping, Chen Junyang, Dong Xiao, Wang Zhihui

Publication information

IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):264-277. doi: 10.1109/TNNLS.2021.3093468. Epub 2023 Jan 5.

Abstract

Existing domain adaptation approaches often try to reduce the distribution difference between the source and target domains while respecting domain-specific discriminative structures, using a distribution distance [e.g., maximum mean discrepancy (MMD)] together with discriminative distances (e.g., intra-class and inter-class distances). However, they usually consider these losses jointly and trade off their relative importance by estimating parameters empirically. How the losses relate to one another has so far been insufficiently explored, so they cannot be manipulated correctly and the model's performance degrades. To this end, this article theoretically proves two essential facts: 1) minimizing MMD is equivalent to jointly minimizing the source and target data variances with some implicit weights while, respectively, maximizing the source and target intra-class distances, so that feature discriminability degrades, and 2) the intra-class and inter-class distances vary inversely: as one falls, the other rises. Based on this, we propose a novel discriminative MMD with two parallel strategies to correctly restrain the degradation of feature discriminability, that is, the expansion of the intra-class distance. Specifically, 1) following fact 1), we directly impose a tradeoff parameter on the intra-class distance that is implicit in the MMD, and 2) we reformulate the inter-class distance with special weights analogous to the implicit ones in the MMD, so that, by fact 2), maximizing it also drives the intra-class distance down. Notably, because of fact 2), we do not combine the two strategies in one model. Experiments on several benchmark datasets not only validate the revealed theoretical results but also demonstrate that the proposed approach substantially outperforms several compared state-of-the-art methods. Our preliminary MATLAB code will be available at https://github.com/WWLoveTransfer/.
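The two facts and the two restraining strategies described in the abstract can be made concrete with a small numerical sketch. The snippet below is a minimal NumPy illustration, not the authors' MATLAB implementation: it assumes a linear-kernel MMD on raw features, uses simple class-size weights for the inter-class distance rather than the paper's special implicit weights, relies on hypothetical target pseudo-labels `yt_pseudo`, and the tradeoff parameters `beta` and `gamma` as well as all function names are illustrative assumptions.

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def intra_class_distance(X, y):
    """Average squared distance of samples to their own class mean."""
    total = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        total += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return total / X.shape[0]

def inter_class_distance(X, y):
    """Class-size-weighted squared distance between class means and the
    global mean (a simplification of the paper's specially weighted form)."""
    mu = X.mean(axis=0)
    total = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        total += Xc.shape[0] * np.sum((Xc.mean(axis=0) - mu) ** 2)
    return total / X.shape[0]

def objective_strategy1(Xs, ys, Xt, yt_pseudo, beta=0.1):
    """Strategy 1 sketch: add an explicit tradeoff `beta` on the intra-class
    distances that minimizing plain MMD would otherwise enlarge (fact 1)."""
    intra = intra_class_distance(Xs, ys) + intra_class_distance(Xt, yt_pseudo)
    return linear_mmd(Xs, Xt) + beta * intra

def objective_strategy2(Xs, ys, Xt, yt_pseudo, gamma=0.1):
    """Strategy 2 sketch: reward a weighted inter-class distance instead;
    by fact 2) enlarging it also drives the intra-class distance down."""
    inter = inter_class_distance(Xs, ys) + inter_class_distance(Xt, yt_pseudo)
    return linear_mmd(Xs, Xt) - gamma * inter
```

In this sketch the two objectives are alternatives, mirroring the abstract: raising `beta` directly counteracts the intra-class expansion that plain MMD induces, while raising `gamma` enlarges the inter-class distance and, through the inverse relationship of fact 2), shrinks the intra-class distance; the two strategies are not combined in one model.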
