Lifelong Metric Learning.

Author Information

Sun Gan, Yang Cong, Liu Ji, Liu Lianqing, Xu Xiaowei, Yu Haibin

Publication Information

IEEE Trans Cybern. 2019 Aug;49(8):3168-3179. doi: 10.1109/TCYB.2018.2841046. Epub 2018 Jun 21.

Abstract

State-of-the-art online learning approaches are only capable of learning the metric for predefined tasks. In this paper, we consider a lifelong learning problem that mimics "human learning," i.e., endowing the learned metric with a new capability for a new task from new online samples while incorporating previous experience. We therefore propose a new metric learning framework, lifelong metric learning (LML), which uses only the data of the new task to train the metric model while preserving the original capabilities. More specifically, the proposed LML maintains a common subspace for all learned metrics, named the lifelong dictionary; transfers knowledge from this common subspace to learn each new metric learning task with its task-specific idiosyncrasy; and redefines the common subspace over time to maximize performance across all metric tasks. For model optimization, we apply an online passive-aggressive optimization algorithm to achieve lifelong metric task learning, in which the lifelong dictionary and the task-specific partition are optimized alternately and consecutively. Finally, we evaluate our approach on several multitask metric learning datasets. Extensive experimental results demonstrate the effectiveness and efficiency of the proposed framework.
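As a reading aid only, one can picture the framework roughly as follows: each task metric is built from a shared basis (the lifelong dictionary) plus a small task-specific code, and when a new task arrives only that code and a gentle refinement of the shared basis are updated. The Python sketch below illustrates this idea under assumptions that are ours, not the paper's: a factorization M_t = L diag(s_t) L^T, a hinge-style pairwise loss, and plain gradient steps in place of the paper's passive-aggressive updates; all class and function names are hypothetical.

```python
# Illustrative sketch only: NOT the paper's implementation, just one way to picture
# "shared lifelong dictionary + task-specific code" with alternating updates.
# The factorization M_t = L @ diag(s_t) @ L.T and all names here are assumptions.
import numpy as np


class LifelongMetricSketch:
    def __init__(self, dim, k, lr=0.01, margin=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.L = rng.normal(scale=0.1, size=(dim, k))  # shared "lifelong dictionary"
        self.codes = []                                # one task-specific code s_t per task
        self.lr, self.margin = lr, margin

    def distance(self, s, x1, x2):
        # Squared distance under M = L diag(s) L^T, evaluated in the shared subspace.
        z = self.L.T @ (x1 - x2)
        return float(np.sum(s * z * z))

    def fit_task(self, pairs, labels, epochs=5):
        """Learn one new task from pairwise constraints.

        pairs  : list of (x1, x2) feature vectors
        labels : +1 for similar pairs (should be close), -1 for dissimilar pairs
        """
        s = np.ones(self.L.shape[1])  # task-specific code, initialized flat
        for _ in range(epochs):
            for (x1, x2), y in zip(pairs, labels):
                u = x1 - x2
                z = self.L.T @ u
                d = float(np.sum(s * z * z))
                # Hinge-style pairwise loss: similar pairs are pulled inside the margin,
                # dissimilar pairs are pushed outside it.
                if y * (d - self.margin) > 0:
                    s -= self.lr * y * (z * z)        # update the task-specific code
                    s = np.maximum(s, 0.0)            # keep the induced metric PSD
                    self.L -= self.lr * 2 * y * np.outer(u, s * z)  # refine the dictionary
        self.codes.append(s)
        return s


# Toy usage: two tasks arriving sequentially, each with its own pairwise constraints.
model = LifelongMetricSketch(dim=5, k=3)
rng = np.random.default_rng(1)
for _ in range(2):  # tasks arrive one at a time
    pairs = [(rng.normal(size=5), rng.normal(size=5)) for _ in range(20)]
    labels = [1 if i % 2 == 0 else -1 for i in range(20)]
    model.fit_task(pairs, labels)
```

The key design point this sketch tries to convey is that old tasks are never retrained: each finished task keeps only its small code s_t, so previous capabilities are preserved while the shared dictionary slowly accumulates knowledge across tasks.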

