
Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2021 Nov;43(11):4094-4110. doi: 10.1109/TPAMI.2020.2991344. Epub 2021 Oct 1.

Abstract

As an effective learning paradigm against insufficient training samples, multi-task learning (MTL) encourages knowledge sharing across multiple related tasks so as to improve the overall performance. In MTL, a major challenge springs from the phenomenon that sharing knowledge with dissimilar and hard tasks, known as negative transfer, often results in worsened performance. Though a substantial number of studies have been carried out to counter negative transfer, most existing methods only model the transfer relationship as task correlations, leaving the transfer across features and tasks unconsidered. Different from existing methods, our goal is to alleviate negative transfer collaboratively across features and tasks. To this end, we propose a novel multi-task learning method called task-feature collaborative learning (TFCL). Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks and suppress inter-group knowledge sharing. We then propose an optimization method for the model. Extensive theoretical analysis shows that our proposed method has the following benefits: (a) it enjoys the global convergence property and (b) it provides a block-diagonal structure recovery guarantee. As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks. We further apply it to the personalized attribute prediction problem with fine-grained modeling of user behaviors. Finally, experimental results on both simulated and real-world datasets demonstrate the effectiveness of our proposed method.
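To give intuition for the block-diagonal idea, the sketch below shows how a penalty can suppress inter-group knowledge sharing when the feature-to-task weight matrix is arranged by groups. This is only an illustration under simplifying assumptions: the group assignments (`feature_groups`, `task_groups`) are taken as fixed here, whereas TFCL learns the collaborative grouping via its heterogeneous regularizer; the function name and setup are hypothetical, not the paper's actual formulation.

```python
import numpy as np

def block_diagonal_penalty(W, feature_groups, task_groups):
    """Sum of squared weights linking a feature group to a *different*
    task group, i.e., the off-block entries of W.

    W              : (d, T) weight matrix, rows = features, cols = tasks
    feature_groups : length-d sequence of group ids, one per feature
    task_groups    : length-T sequence of group ids, one per task
    """
    fg = np.asarray(feature_groups)[:, None]   # shape (d, 1)
    tg = np.asarray(task_groups)[None, :]      # shape (1, T)
    off_block = fg != tg                       # (d, T) mask of inter-group entries
    return float(np.sum(W[off_block] ** 2))

# Toy example: 4 features and 3 tasks split into two groups.
# W is perfectly block-diagonal, so the penalty is zero.
W = np.array([[1.0, 0.9, 0.0],
              [0.8, 1.1, 0.0],
              [0.0, 0.0, 1.2],
              [0.0, 0.0, 0.7]])
feature_groups = [0, 0, 1, 1]
task_groups = [0, 0, 1]
print(block_diagonal_penalty(W, feature_groups, task_groups))  # 0.0
```

Adding such a penalty to a standard MTL loss drives the off-block weights toward zero, so each task group draws mainly on its own feature group and cross-group (potentially negative) transfer is suppressed.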

