Learning Domain-Independent Deep Representations by Mutual Information Minimization.

Affiliations

College of Mathematics, Sichuan University, Chengdu 610065, China.

College of Cybersecurity, Sichuan University, Chengdu 610065, China.

Publication Information

Comput Intell Neurosci. 2019 Jun 16;2019:9414539. doi: 10.1155/2019/9414539. eCollection 2019.

Abstract

Domain transfer learning aims to learn common data representations from a source domain and a target domain so that the source-domain data can help classify the target domain. Conventional transfer representation learning constrains the distributions of the source- and target-domain representations to be similar, which relies heavily on how the domain distributions are characterized and on the distribution-matching criteria. In this paper, we propose a novel framework for domain transfer representation learning. Our motivation is to make the learned representations of data points independent of the domains they belong to; in other words, given an optimal cross-domain representation of a data point, it should be difficult to tell which domain the point came from. In this way, the learned representations can generalize across domains. To measure the dependency between the representations and the domains of the corresponding data points, we propose to use the mutual information between the representations and the domain-membership indicators. By minimizing this mutual information, we learn representations that are independent of the domains. We build a classwise deep convolutional network model as the representation model and maximize the margin of each data point within its class, where the margin is defined over intraclass and interclass neighborhoods. To learn the model parameters, we construct a unified minimization problem in which the margins are maximized while the representation-domain mutual information is minimized; in this way, we learn representations that are not only discriminative but also domain-independent. An iterative algorithm based on the Adam optimization method is proposed to solve this problem, learning the classwise deep model parameters and the cross-domain representations simultaneously. Extensive experiments on benchmark datasets show the effectiveness of the method and its advantage over existing domain transfer learning methods.
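
The abstract outlines the full recipe (a convolutional representation model, a margin maximized over intraclass and interclass neighborhoods, and a representation-domain mutual-information penalty, all minimized jointly with Adam) but does not give the exact mutual-information estimator or margin construction. The following is a minimal PyTorch sketch under stated assumptions: the dependence between a representation z and a binary source/target indicator d is penalized through an auxiliary domain predictor with a gradient-reversal surrogate (a standard device for suppressing representation-domain dependence, not necessarily the authors' estimator), and the margin term is a batch-level hinge over each point's nearest intraclass and interclass neighbors. All names (RepNet, domain_head, train_step) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients in the
    backward pass, so training the domain predictor pushes the encoder
    toward representations that are uninformative about the domain."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class RepNet(nn.Module):
    """Stand-in convolutional representation model f(x) -> z."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def margin_loss(z, y, margin=1.0):
    """Hinge margin over each point's nearest intraclass and interclass
    neighbors in the batch (assumes every class has >= 2 samples)."""
    dist = torch.cdist(z, z)                   # pairwise Euclidean distances
    same = (y[:, None] == y[None, :]).float()  # 1 where labels match
    eye = torch.eye(len(y), device=z.device)
    big = 1e9                                  # masks out invalid pairs
    d_intra = (dist + big * (1.0 - same) + big * eye).min(dim=1).values
    d_inter = (dist + big * same).min(dim=1).values
    return F.relu(margin + d_intra - d_inter).mean()

rep = RepNet()
domain_head = nn.Linear(64, 2)                 # predicts source vs. target
opt = torch.optim.Adam(
    list(rep.parameters()) + list(domain_head.parameters()), lr=1e-3)

def train_step(x, y, d, lam=0.1):
    """x: images, y: class labels, d: 0/1 domain indicator (all tensors)."""
    z = rep(x)
    # The domain predictor is trained to recover d; the reversed gradients
    # drive the encoder toward representations from which d cannot be
    # recovered, pushing the representation-domain dependence toward zero.
    dep = F.cross_entropy(domain_head(GradReverse.apply(z, lam)), d)
    loss = margin_loss(z, y) + dep
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the gradient reversal flips only the gradients flowing into the encoder, a single Adam step trains the domain predictor to recover d while simultaneously pushing the encoder to defeat it, which matches the abstract's unified, simultaneous optimization in spirit; the trade-off weight lam and the hinge margin are hyperparameters that would need tuning per dataset.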

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c98/6604496/26fb587fef08/CIN2019-9414539.001.jpg
