Suppr 超能文献


Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

Author information

Yang Honghong, Qu Shiru

Affiliation

Department of Automation, Northwestern Polytechnical University, Xi'an 710072, China.

Publication information

Comput Intell Neurosci. 2016;2016:5894639. doi: 10.1155/2016/5894639. Epub 2016 Aug 18.

DOI: 10.1155/2016/5894639
PMID: 27630710
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5008034/
Abstract

Object tracking based on sparse representation has yielded promising results in recent years. However, trackers built on this framework tend to overemphasize the sparse representation itself and ignore correlations in the visual information. In addition, sparse coding methods encode each local region independently, ignoring the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is built from instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes both the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. The reliability of each tracker is then measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is selected within a well-established particle filter framework, and the training set and template library are incrementally updated from the current tracking results. Experimental results on challenging video sequences show that the proposed algorithm achieves superior tracking accuracy and robustness.
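The pipeline the abstract describes — sparse-code each candidate patch against a template library, score it by reconstruction error, and let the likelihood weight the candidates in a particle filter — can be sketched as follows. This is a generic illustration, not the paper's two-stage multi-feature method: the ISTA solver, the exponential likelihood, and all names (`ista_sparse_code`, `tracking_likelihood`, `sigma`) are assumptions made for the sketch.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    # Solve min_a 0.5*||y - D@a||^2 + lam*||a||_1 with ISTA
    # (a generic stand-in for the paper's two-stage sparse coding).
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - step * (D.T @ (D @ a - y))        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-threshold
    return a

def tracking_likelihood(D, y, lam=0.1, sigma=0.1):
    # Score a candidate patch y by how well the template library D
    # reconstructs it: exp(-reconstruction_error / sigma).
    a = ista_sparse_code(D, y, lam)
    err = np.sum((y - D @ a) ** 2)
    return np.exp(-err / sigma)

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 5))                      # 5 appearance templates, 16-dim patches
D /= np.linalg.norm(D, axis=0)                    # normalize template columns
target = D @ np.array([0.9, 0.1, 0.0, 0.0, 0.0])  # candidate well explained by templates
clutter = rng.normal(size=16)                     # background candidate

# The candidate the templates explain well gets the higher weight; in a full
# tracker these weights drive the particle-filter resampling step.
best = max([target, clutter], key=lambda y: tracking_likelihood(D, y))
```

In practice each particle proposes a patch, all patches are scored this way, and the template library is updated incrementally from the winning patch, as the abstract outlines.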


Figures (PMC, CIN2016-5894639.001–006):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e976/5008034/242770624320/CIN2016-5894639.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e976/5008034/058744e6c69a/CIN2016-5894639.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e976/5008034/40172e29edb2/CIN2016-5894639.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e976/5008034/1bee3c104577/CIN2016-5894639.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e976/5008034/034b214076b5/CIN2016-5894639.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e976/5008034/19bca624f0f7/CIN2016-5894639.006.jpg

Similar articles

1. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking.
Comput Intell Neurosci. 2016;2016:5894639. doi: 10.1155/2016/5894639. Epub 2016 Aug 18.
2. Discriminative object tracking via sparse representation and online dictionary learning.
IEEE Trans Cybern. 2014 Apr;44(4):539-53. doi: 10.1109/TCYB.2013.2259230. Epub 2013 May 31.
3. Incremental learning of 3D-DCT compact representations for robust visual tracking.
IEEE Trans Pattern Anal Mach Intell. 2013 Apr;35(4):863-81. doi: 10.1109/TPAMI.2012.166.
4. Robust object tracking via online dynamic spatial bias appearance models.
IEEE Trans Pattern Anal Mach Intell. 2007 Dec;29(12):2157-69. doi: 10.1109/TPAMI.2007.1134.
5. Sparse Coding and Counting for Robust Visual Tracking.
PLoS One. 2016 Dec 16;11(12):e0168093. doi: 10.1371/journal.pone.0168093. eCollection 2016.
6. Tracking by third-order tensor representation.
IEEE Trans Syst Man Cybern B Cybern. 2011 Apr;41(2):385-96. doi: 10.1109/TSMCB.2010.2056366. Epub 2010 Aug 16.
7. Robust object tracking based on local discriminative sparse representation.
J Opt Soc Am A Opt Image Sci Vis. 2017 Apr 1;34(4):533-544. doi: 10.1364/JOSAA.34.000533.
8. Object Tracking Based On Huber Loss Function.
Vis Comput. 2019 Nov;35(11):1641-1654. doi: 10.1007/s00371-018-1563-1. Epub 2018 May 24.
9. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.
IEEE Trans Cybern. 2013 Dec;43(6):2005-19. doi: 10.1109/TSMCB.2013.2237900.
10. Visual Object Tracking Using Structured Sparse PCA-Based Appearance Representation and Online Learning.
Sensors (Basel). 2018 Oct 18;18(10):3513. doi: 10.3390/s18103513.

References cited in this article

1. Interacting Multiview Tracker.
IEEE Trans Pattern Anal Mach Intell. 2016 May;38(5):903-17. doi: 10.1109/TPAMI.2015.2473862. Epub 2015 Aug 27.
2. Efficient minimum error bounded particle resampling L1 tracker with occlusion detection.
IEEE Trans Image Process. 2013 Jul;22(7):2661-75. doi: 10.1109/TIP.2013.2255301. Epub 2013 Mar 28.
3. Robust visual tracking and vehicle classification via sparse representation.
IEEE Trans Pattern Anal Mach Intell. 2011 Nov;33(11):2259-72. doi: 10.1109/TPAMI.2011.66.
4. Context-aware visual tracking.
IEEE Trans Pattern Anal Mach Intell. 2009 Jul;31(7):1195-209. doi: 10.1109/TPAMI.2008.146.
5. Dependent multiple cue integration for robust tracking.
IEEE Trans Pattern Anal Mach Intell. 2008 Apr;30(4):670-85. doi: 10.1109/TPAMI.2007.70727.