
Adaptive low-rank subspace learning with online optimization for robust visual tracking.

Authors

Liu Risheng, Wang Di, Han Yuzhuo, Fan Xin, Luo Zhongxuan

Affiliations

School of Software Technology, Dalian University of Technology, Dalian, 116024, China; The State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, 710071, China; Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian University of Technology, Dalian, 116024, China.

School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024, China.

Publication

Neural Netw. 2017 Apr;88:90-104. doi: 10.1016/j.neunet.2017.02.002. Epub 2017 Feb 10.

Abstract

In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods consider only the sparsity or the low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, because both the low-rank and the column-sparse measures are tightly coupled to all the samples in a sequence, it is challenging to incrementally solve optimization problems with both nuclear-norm and column-sparse-norm terms on sequentially obtained video data. To address these limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Unlike previous work, which often simply decomposes observations into low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, the low-rank coefficients, and the column-sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard product based regularization that incorporates rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. We show that such adaptive penalization significantly improves the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods.
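The abstract does not state the objective function, but the description above suggests a model of the following kind. This is an illustrative sketch only: the symbols X (observed frame patches), D (subspace basis), Z (low-rank coefficients), E (column-sparse errors), W (adaptive Hadamard weights), and the trade-off parameters α, λ are assumptions, not the paper's exact formulation.

```latex
% Assumed form only. \odot denotes the elementwise (Hadamard) product;
% \|\cdot\|_{*} is the nuclear norm, \|\cdot\|_{2,1} the column-sparse norm.
\min_{D,\,Z,\,E}\;\; \|Z\|_{*} \;+\; \alpha\,\|W \odot Z\|_{1} \;+\; \lambda\,\|E\|_{2,1}
\qquad \text{s.t.}\qquad X = D Z + E
```

The two non-smooth terms named in the abstract have well-known proximal operators: singular value thresholding for the nuclear norm, and column-wise shrinkage for the ℓ2,1 (column-sparse) norm. A minimal NumPy sketch of these two standard operators follows; it illustrates the building blocks such minimizations rely on, not the paper's incremental solver.

```python
import numpy as np

def prox_nuclear(M, tau):
    """Singular value thresholding: prox of tau * ||M||_* (nuclear norm).
    Shrinks each singular value toward zero by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Scale the columns of U by the shrunken singular values.
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_col_sparse(M, tau):
    """Column-wise shrinkage: prox of tau * ||M||_{2,1} (column-sparse norm).
    Columns with l2 norm below tau are zeroed; the rest are shrunk radially."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale
```

Batch solvers of this type (e.g., ADMM or proximal gradient) alternate such operators with least-squares updates of the basis D over all samples; the incremental scheme described in the abstract is designed to avoid reprocessing all past frames when a new frame arrives.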

