
Continual Multiview Task Learning via Deep Matrix Factorization

Author Information

Sun Gan, Cong Yang, Zhang Yulun, Zhao Guoshuai, Fu Yun

Publication Information

IEEE Trans Neural Netw Learn Syst. 2021 Jan;32(1):139-150. doi: 10.1109/TNNLS.2020.2977497. Epub 2021 Jan 4.

Abstract

State-of-the-art multitask multiview (MTMV) learning tackles the scenario in which multiple tasks are related to each other via multiple shared feature views. However, in many real-world scenarios where multiview tasks arrive sequentially, the storage requirements and computational cost of retraining on previous tasks with MTMV models present a formidable challenge for this lifelong learning setting. To address this challenge, in this article, we propose a new continual multiview task learning model that integrates deep matrix factorization and sparse subspace learning in a unified framework, termed deep continual multiview task learning (DCMvTL). More specifically, as a new multiview task arrives, DCMvTL first adopts a deep matrix factorization technique to capture hidden and hierarchical representations for the new task while accumulating the fresh multiview knowledge in a layerwise manner. Then, a sparse subspace learning model is applied to the extracted factors at each layer and further reveals cross-view correlations via a self-expressive constraint. For model optimization, we derive a general multiview learning formulation for each newly arriving multiview task and apply an alternating minimization strategy to achieve lifelong learning. Extensive experiments on benchmark data sets demonstrate the effectiveness of the proposed DCMvTL model compared with existing state-of-the-art MTMV and lifelong multiview task learning models.
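
To make the two main ingredients of the abstract concrete, below is a minimal, single-view, single-task sketch of the kind of objective it describes: a two-layer matrix factorization X ≈ Z1 Z2 H combined with a sparse self-expressive term H ≈ HC, solved by alternating (proximal) gradient steps. The function names, layer sizes, and the hyperparameters `dims`, `lam`, `gamma`, and `lr` are illustrative assumptions; this is not the authors' DCMvTL implementation, which additionally handles multiple views, a sequence of tasks, and layerwise knowledge accumulation.

```python
import numpy as np


def soft_threshold(A, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)


def deep_mf_self_expressive(X, dims=(64, 32), lam=0.1, gamma=1.0,
                            lr=1e-3, n_iters=200, seed=0):
    """Alternating minimization for a single view X (features x samples):

        min  ||X - Z1 Z2 H||_F^2 + gamma ||H - H C||_F^2 + lam ||C||_1
        s.t. diag(C) = 0

    Z1, Z2 are the layerwise factors, H is the deepest representation,
    and C is the sparse self-expressive coefficient matrix.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    k1, k2 = dims
    Z1 = 0.1 * rng.standard_normal((d, k1))
    Z2 = 0.1 * rng.standard_normal((k1, k2))
    H = 0.1 * rng.standard_normal((k2, n))
    C = np.zeros((n, n))

    for _ in range(n_iters):
        # Gradient steps on the layerwise factors of the deep factorization.
        R = Z1 @ Z2 @ H - X                       # reconstruction residual
        Z1 -= lr * (2.0 * R @ H.T @ Z2.T)
        R = Z1 @ Z2 @ H - X
        Z2 -= lr * (2.0 * Z1.T @ R @ H.T)

        # Gradient step on the deepest representation H (both smooth terms).
        R = Z1 @ Z2 @ H - X
        S = H - H @ C                             # self-expressive residual
        grad_H = 2.0 * (Z1 @ Z2).T @ R + 2.0 * gamma * (S - S @ C.T)
        H -= lr * grad_H

        # Proximal-gradient (ISTA) step on the sparse code C.
        S = H - H @ C
        grad_C = -2.0 * gamma * (H.T @ S)
        C = soft_threshold(C - lr * grad_C, lr * lam)
        np.fill_diagonal(C, 0.0)                  # enforce diag(C) = 0

    return Z1, Z2, H, C


if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((100, 200))
    Z1, Z2, H, C = deep_mf_self_expressive(X)
    print("reconstruction error:", np.linalg.norm(X - Z1 @ Z2 @ H))
```

Plain gradient and ISTA steps keep the sketch short; the paper's alternating-minimization scheme for the full multiview, multi-task objective would replace these with its own per-block updates.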

