
Deep Into the Domain Shift: Transfer Learning Through Dependence Regularization.

Author Information

Ma Shumin, Yuan Zhiri, Wu Qi, Huang Yiyan, Hu Xixu, Leung Cheuk Hang, Wang Dongdong, Huang Zhixiang

Publication Information

IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14409-14423. doi: 10.1109/TNNLS.2023.3279099. Epub 2024 Oct 7.

Abstract

Classical domain adaptation methods acquire transferability by regularizing the overall distributional discrepancies between features in the source domain (labeled) and features in the target domain (unlabeled). They often do not differentiate whether the domain differences come from the marginals or the dependence structures. In many business and financial applications, the labeling function usually has different sensitivities to the changes in the marginals versus changes in the dependence structures. Measuring the overall distributional differences will not be discriminative enough in acquiring transferability. Without the needed structural resolution, the learned transfer is less optimal. This article proposes a new domain adaptation approach in which one can measure the differences in the internal dependence structure separately from those in the marginals. By optimizing the relative weights among them, the new regularization strategy greatly relaxes the rigidness of the existing approaches. It allows a learning machine to pay special attention to places where the differences matter the most. Experiments on three real-world datasets show that the improvements are quite notable and robust compared to various benchmark domain adaptation models.
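The abstract's central idea, penalizing marginal shift and dependence-structure shift separately with tunable relative weights, can be sketched as follows. This is an illustrative toy, not the paper's actual method: the abstract does not specify the discrepancy measures, so the 1-D Wasserstein distance for marginals and the Frobenius distance between rank-correlation matrices for dependence are assumptions chosen for simplicity, and the function names (`marginal_discrepancy`, `dependence_discrepancy`, `transfer_regularizer`) are hypothetical.

```python
import numpy as np

def marginal_discrepancy(Xs, Xt):
    """Average per-feature discrepancy between source and target marginals,
    via a quantile-based approximation of the 1-D Wasserstein-1 distance
    (an illustrative choice; the abstract does not name a metric)."""
    n = min(len(Xs), len(Xt))
    grid = np.linspace(0, 1, n)
    d = 0.0
    for j in range(Xs.shape[1]):
        qs = np.quantile(Xs[:, j], grid)
        qt = np.quantile(Xt[:, j], grid)
        d += np.mean(np.abs(qs - qt))
    return d / Xs.shape[1]

def dependence_discrepancy(Xs, Xt):
    """Discrepancy between dependence structures, measured as the Frobenius
    distance between rank (Spearman-style) correlation matrices. Rank
    correlations depend only on the copula, so this term is insensitive
    to pure marginal shifts."""
    def rank_corr(X):
        ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
        return np.corrcoef(ranks, rowvar=False)
    return np.linalg.norm(rank_corr(Xs) - rank_corr(Xt))

def transfer_regularizer(Xs, Xt, w_marginal=1.0, w_dependence=1.0):
    """Weighted sum of the two terms: tuning the relative weights lets a
    learner emphasize whichever kind of shift matters most for the task."""
    return (w_marginal * marginal_discrepancy(Xs, Xt)
            + w_dependence * dependence_discrepancy(Xs, Xt))
```

Because the two terms are separated, a pure mean shift of all features drives only the marginal term, while a change in how features co-move drives only the dependence term; a single overall distributional distance would conflate the two.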

