Semisupervised multitask learning.

Author Information

Qiuhua Liu, Xuejun Liao, Hui Li Carin, Jason R. Stack, Lawrence Carin

Affiliations

Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708-0291, USA.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2009 Jun;31(6):1074-86. doi: 10.1109/TPAMI.2008.296.

Abstract

Context plays an important role when performing classification, and in this paper we examine context from two perspectives. First, the classification of items within a single task is placed within the context of distinct concurrent or previous classification tasks (multiple distinct data collections). This is referred to as multi-task learning (MTL), and is implemented here in a statistical manner, using a simplified form of the Dirichlet process. In addition, when performing many classification tasks one has simultaneous access to all unlabeled data that must be classified, and therefore there is an opportunity to place the classification of any one feature vector within the context of all unlabeled feature vectors; this is referred to as semi-supervised learning. In this paper we integrate MTL and semi-supervised learning into a single framework, thereby exploiting two forms of contextual information. Results are presented on a "toy" example to demonstrate the concept, and the algorithm is also applied to three real data sets.
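The abstract describes the MTL ingredient only at a high level: a simplified Dirichlet process softly groups related tasks so that tasks in the same group share classifier parameters. As a rough illustration of that grouping mechanism (a minimal sketch, not the authors' algorithm), the Python snippet below samples a task partition from the Chinese restaurant process, the sequential construction underlying the Dirichlet process. The function name crp_partition and the concentration parameter alpha are illustrative assumptions.

```python
# Hedged sketch: Chinese-restaurant-process draw illustrating how a
# Dirichlet process can softly group related classification tasks.
# Not the paper's algorithm; names and parameters are assumptions.
import numpy as np

def crp_partition(num_tasks, alpha, rng):
    """Sample a task partition from the Chinese restaurant process.

    Tasks assigned to the same cluster would share classifier
    parameters; alpha controls how readily new clusters form.
    """
    assignments = [0]  # the first task starts the first cluster
    counts = [1]       # number of tasks in each cluster
    for t in range(1, num_tasks):
        # Existing cluster k is chosen with probability counts[k]/(t+alpha);
        # a brand-new cluster with probability alpha/(t+alpha).
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)  # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

rng = np.random.default_rng(0)
print(crp_partition(num_tasks=10, alpha=1.0, rng=rng))
# e.g. [0, 0, 0, 1, 0, 1, ...] -- tasks with the same label share parameters
```

In a full model along the lines the abstract sketches, each cluster's draw from the base measure would supply the shared classifier parameters, and unlabeled data from all tasks would further shape the decision boundaries, which is where the semi-supervised component enters.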
