

Transferring visual prior for online object tracking.

Affiliation

National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing 100084, China.

Publication information

IEEE Trans Image Process. 2012 Jul;21(7):3296-305. doi: 10.1109/TIP.2012.2190085. Epub 2012 Apr 5.

DOI: 10.1109/TIP.2012.2190085
PMID: 22491081
Abstract

Visual prior from generic real-world images can be learned and transferred for representing objects in a scene. Motivated by this, we propose an algorithm that transfers visual prior learned offline for online object tracking. From a collection of real-world images, we learn an overcomplete dictionary to represent visual prior. The prior knowledge of objects is generic, and the training image set does not necessarily contain any observation of the target object. During the tracking process, the learned visual prior is transferred to construct an object representation by sparse coding and multiscale max pooling. With this representation, a linear classifier is learned online to distinguish the target from the background and to account for the target and background appearance variations over time. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter is used to estimate the tracking result sequentially. Experiments on a variety of challenging sequences with comparisons to several state-of-the-art methods demonstrate that more robust object tracking can be achieved by transferring visual prior.
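The representation step described above (sparse coding of local patches over an offline-learned overcomplete dictionary, followed by multiscale max pooling into one feature vector) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the ISTA sparse-coding solver, the dictionary size, the 4x4 patch grid, and the pyramid levels are all illustrative assumptions.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=100):
    """ISTA solver for min_a 0.5*||x - D a||^2 + lam*||a||_1 (illustrative)."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def multiscale_max_pool(codes, grid_sizes=(1, 2, 4)):
    """Max-pool absolute patch codes over a spatial pyramid.

    codes: (H, W, K) sparse codes of local patches on an H x W grid.
    Returns the concatenated pooled features of all pyramid cells.
    """
    H, W, K = codes.shape
    pooled = []
    for g in grid_sizes:
        for hi in np.array_split(np.arange(H), g):
            for wi in np.array_split(np.arange(W), g):
                cell = np.abs(codes[np.ix_(hi, wi)]).reshape(-1, K)
                pooled.append(cell.max(axis=0))
    return np.concatenate(pooled)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))       # hypothetical dictionary: 128 atoms
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
patches = rng.standard_normal((4, 4, 64))  # 4x4 grid of flattened 8x8 patches
codes = np.stack([[sparse_code(patches[i, j], D) for j in range(4)]
                  for i in range(4)])    # (4, 4, 128)
feat = multiscale_max_pool(codes)
print(feat.shape)  # (2688,) = (1 + 4 + 16) cells * 128 atoms
```

The resulting fixed-length vector is what an online linear classifier could then score to separate target from background inside the particle-filter loop.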


Similar articles

1. Transferring visual prior for online object tracking.
IEEE Trans Image Process. 2012 Jul;21(7):3296-305. doi: 10.1109/TIP.2012.2190085. Epub 2012 Apr 5.
2. Online object tracking with sparse prototypes.
IEEE Trans Image Process. 2013 Jan;22(1):314-25. doi: 10.1109/TIP.2012.2202677. Epub 2012 Jun 5.
3. Learning local appearances with sparse representation for robust and fast visual tracking.
IEEE Trans Cybern. 2015 Apr;45(4):663-75. doi: 10.1109/TCYB.2014.2332279. Epub 2014 Jul 10.
4. Object tracking via partial least squares analysis.
IEEE Trans Image Process. 2012 Oct;21(10):4454-65. doi: 10.1109/TIP.2012.2205700. Epub 2012 Jun 22.
5. Tracking by third-order tensor representation.
IEEE Trans Syst Man Cybern B Cybern. 2011 Apr;41(2):385-96. doi: 10.1109/TSMCB.2010.2056366. Epub 2010 Aug 16.
6. Quantifying and transferring contextual information in object detection.
IEEE Trans Pattern Anal Mach Intell. 2012 Apr;34(4):762-77. doi: 10.1109/TPAMI.2011.164.
7. Discriminative object tracking via sparse representation and online dictionary learning.
IEEE Trans Cybern. 2014 Apr;44(4):539-53. doi: 10.1109/TCYB.2013.2259230. Epub 2013 May 31.
8. Robust visual tracking and vehicle classification via sparse representation.
IEEE Trans Pattern Anal Mach Intell. 2011 Nov;33(11):2259-72. doi: 10.1109/TPAMI.2011.66.
9. Video tracking using learned hierarchical features.
IEEE Trans Image Process. 2015 Apr;24(4):1424-35. doi: 10.1109/TIP.2015.2403231. Epub 2015 Feb 12.
10. Robust face tracking via collaboration of generic and specific models.
IEEE Trans Image Process. 2008 Jul;17(7):1189-99. doi: 10.1109/TIP.2008.924287.

Cited by

1. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.
Biomed Res Int. 2016;2016:9406259. doi: 10.1155/2016/9406259. Epub 2016 Oct 26.