
Suppr 超能文献

Core technology patent: CN118964589B (infringement will be prosecuted)
粤ICP备2023148730号-1 · Suppr © 2026


Self-Supervised Tracking via Target-Aware Data Synthesis.

Authors

Li Xin, Pei Wenjie, Wang Yaowei, He Zhenyu, Lu Huchuan, Yang Ming-Hsuan

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9186-9197. doi: 10.1109/TNNLS.2022.3231537. Epub 2024 Jul 10.

DOI: 10.1109/TNNLS.2022.3231537
PMID: 37018296
Abstract

While deep-learning-based tracking methods have achieved substantial progress, they entail large-scale and high-quality annotated data for sufficient training. To eliminate expensive and exhaustive annotation, we study self-supervised (SS) learning for visual tracking. In this work, we develop the crop-transform-paste operation, which is able to synthesize sufficient training data by simulating various appearance variations during tracking, including appearance variations of objects and background interference. Since the target state is known in all synthesized data, existing deep trackers can be trained in routine ways using the synthesized data without human annotation. The proposed target-aware data-synthesis method adapts existing tracking approaches within a SS learning framework without algorithmic changes. Thus, the proposed SS learning mechanism can be seamlessly integrated into existing tracking frameworks to perform training. Extensive experiments show that our method: 1) achieves favorable performance against supervised (Su) learning schemes under the cases with limited annotations; 2) helps deal with various tracking challenges such as object deformation, occlusion (OCC), or background clutter (BC) due to its manipulability; 3) performs favorably against the state-of-the-art unsupervised tracking methods; and 4) boosts the performance of various state-of-the-art Su learning frameworks, including SiamRPN++, DiMP, and TransT.
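The crop-transform-paste operation described in the abstract can be illustrated with a minimal sketch: crop the target patch from a frame, apply an appearance transform, and paste it onto another background so the target box is known by construction. This is not the authors' implementation; the function name, parameters, and the choice of a horizontal flip as the transform are placeholders for illustration only.

```python
import numpy as np

def crop_transform_paste(frame, box, background, paste_xy, flip=True):
    """Synthesize one training frame with a known target state.

    frame:      HxWx3 array containing the target object
    box:        (x, y, w, h) target box inside `frame`
    background: HxWx3 array to paste into (simulates background interference)
    paste_xy:   (x, y) top-left paste location in `background`
    flip:       horizontally flip the patch (a simple appearance transform)

    Returns (synthetic_frame, new_box). The new box is the ground-truth
    annotation, known by construction, so no human labeling is required.
    """
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w].copy()   # crop the target patch
    if flip:
        patch = patch[:, ::-1]               # transform (appearance variation)
    px, py = paste_xy
    out = background.copy()
    out[py:py + h, px:px + w] = patch        # paste at the new location
    return out, (px, py, w, h)
```

Because the synthesized frame carries its own box annotation, pairs produced this way can be fed to an existing tracker's training loop unchanged, which is the sense in which the paper's self-supervised scheme plugs into supervised frameworks without algorithmic changes.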


Similar Articles

1. Self-Supervised Tracking via Target-Aware Data Synthesis.
   IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9186-9197. doi: 10.1109/TNNLS.2022.3231537. Epub 2024 Jul 10.
2. Context-Aware Correlation Filter Learning Toward Peak Strength for Visual Tracking.
   IEEE Trans Cybern. 2021 Oct;51(10):5105-5115. doi: 10.1109/TCYB.2019.2935347. Epub 2021 Oct 12.
3. Self-Supervised Deep Correlation Tracking.
   IEEE Trans Image Process. 2021;30:976-985. doi: 10.1109/TIP.2020.3037518. Epub 2020 Dec 9.
4. Learning dual-margin model for visual tracking.
   Neural Netw. 2021 Aug;140:344-354. doi: 10.1016/j.neunet.2021.04.004. Epub 2021 Apr 16.
5. Hierarchical Spatiotemporal Context-Aware Correlation Filters for Visual Tracking.
   IEEE Trans Cybern. 2021 Dec;51(12):6066-6079. doi: 10.1109/TCYB.2020.2964757. Epub 2021 Dec 22.
6. Motion-Aware Correlation Filters for Online Visual Tracking.
   Sensors (Basel). 2018 Nov 14;18(11):3937. doi: 10.3390/s18113937.
7. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations.
   Int J Mol Sci. 2023 Nov 7;24(22):16028. doi: 10.3390/ijms242216028.
8. Discriminative Scale Space Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2017 Aug;39(8):1561-1575. doi: 10.1109/TPAMI.2016.2609928. Epub 2016 Sep 15.
9. Sparsely-Supervised Object Tracking.
   IEEE Trans Image Process. 2024;33:3470-3485. doi: 10.1109/TIP.2024.3404257. Epub 2024 Jun 4.
10. SiamHYPER: Learning a Hyperspectral Object Tracker From an RGB-Based Tracker.
    IEEE Trans Image Process. 2022;31:7116-7129. doi: 10.1109/TIP.2022.3216995. Epub 2022 Nov 16.

Cited By

1. Generalized Hierarchical Co-Saliency Learning for Label-Efficient Tracking.
   Sensors (Basel). 2025 Jul 29;25(15):4691. doi: 10.3390/s25154691.
2. Self-Supervised Visual Tracking via Image Synthesis and Domain Adversarial Learning.
   Sensors (Basel). 2025 Jul 25;25(15):4621. doi: 10.3390/s25154621.