
Learning dual-margin model for visual tracking.

Author Affiliations

School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China.

School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China; Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China.

Publication Info

Neural Netw. 2021 Aug;140:344-354. doi: 10.1016/j.neunet.2021.04.004. Epub 2021 Apr 16.

DOI: 10.1016/j.neunet.2021.04.004
PMID: 33930720
Abstract

Existing trackers usually exploit robust features or online updating mechanisms to deal with target variations, which are a key challenge in visual tracking. However, features that are robust to variations retain little spatial information, and existing online updating methods are prone to overfitting. In this paper, we propose a dual-margin model for robust and accurate visual tracking. The dual-margin model comprises an intra-object margin between different target appearances and an inter-object margin between the target and the background. The proposed method is able not only to distinguish the target from the background but also to perceive target changes, which tracks target appearance changes and facilitates accurate target state estimation. In addition, to exploit rich off-line video data and learn general rules of target appearance variation, we train the dual-margin model on a large off-line video dataset. We perform tracking under a Siamese framework using the constructed appearance set as templates. The proposed method achieves accurate and robust tracking performance on five public datasets while running in real time. The favorable performance against state-of-the-art methods demonstrates the effectiveness of the proposed algorithm.
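The dual-margin idea above pairs an intra-object margin (different appearances of the same target should stay similar to the template) with an inter-object margin (background regions should stay dissimilar). A minimal hinge-style sketch of such a loss on similarity scores; the function name, thresholds, and exact formulation are illustrative assumptions, not the paper's actual objective:

```python
def dual_margin_loss(s_appear, s_background, m_intra=0.2, m_inter=0.8):
    """Illustrative dual-margin hinge loss over similarity scores in [0, 1].

    s_appear:     similarity between the template and a different appearance
                  of the same target; penalized if it drops below 1 - m_intra.
    s_background: similarity between the template and a background region;
                  penalized if it rises above 1 - m_inter.
    """
    intra = max(0.0, (1.0 - m_intra) - s_appear)      # keep target appearances close
    inter = max(0.0, s_background - (1.0 - m_inter))  # push the background away
    return intra + inter
```

With these hypothetical margins, a well-separated pair such as `dual_margin_loss(0.9, 0.1)` incurs zero loss, while an ambiguous pair such as `dual_margin_loss(0.5, 0.5)` is penalized on both terms. Using a larger inter-object than intra-object margin mirrors the intuition that the background must be pushed much further from the template than a changed appearance of the same target.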


Similar Articles

1. Learning dual-margin model for visual tracking.
   Neural Netw. 2021 Aug;140:344-354. doi: 10.1016/j.neunet.2021.04.004. Epub 2021 Apr 16.
2. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.
   IEEE Trans Cybern. 2013 Dec;43(6):2005-19. doi: 10.1109/TSMCB.2013.2237900.
3. Siamese Regression Tracking With Reinforced Template Updating.
   IEEE Trans Image Process. 2021;30:628-640. doi: 10.1109/TIP.2020.3036723. Epub 2020 Dec 4.
4. Learning adaptive metric for robust visual tracking.
   IEEE Trans Image Process. 2011 Aug;20(8):2288-300. doi: 10.1109/TIP.2011.2114895. Epub 2011 Feb 17.
5. BIT: Biologically Inspired Tracker.
   IEEE Trans Image Process. 2016 Mar;25(3):1327-39. doi: 10.1109/TIP.2016.2520358.
6. Dual-regression model for visual tracking.
   Neural Netw. 2020 Dec;132:364-374. doi: 10.1016/j.neunet.2020.09.011. Epub 2020 Sep 24.
7. Hedging Deep Features for Visual Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2019 May;41(5):1116-1130. doi: 10.1109/TPAMI.2018.2828817. Epub 2018 Apr 20.
8. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism.
   PLoS One. 2016 Aug 30;11(8):e0161808. doi: 10.1371/journal.pone.0161808. eCollection 2016.
9. Patch-based adaptive weighting with segmentation and scale (PAWSS) for visual tracking in surgical video.
   Med Image Anal. 2019 Oct;57:120-135. doi: 10.1016/j.media.2019.07.002. Epub 2019 Jul 4.
10. SiamATL: Online Update of Siamese Tracking Network via Attentional Transfer Learning.
    IEEE Trans Cybern. 2022 Aug;52(8):7527-7540. doi: 10.1109/TCYB.2020.3043520. Epub 2022 Jul 19.

Cited By

1. SGAT: Shuffle and graph attention based Siamese networks for visual tracking.
   PLoS One. 2022 Nov 23;17(11):e0277064. doi: 10.1371/journal.pone.0277064. eCollection 2022.
2. Antiocclusion Visual Tracking Algorithm Combining Fully Convolutional Siamese Network and Correlation Filtering.
   Comput Intell Neurosci. 2022 Aug 9;2022:8051876. doi: 10.1155/2022/8051876. eCollection 2022.