
Robust Visual Tracking Based on Adaptive Convolutional Features and Offline Siamese Tracker

Affiliation

Academy of Astronautics, Northwestern Polytechnical University, YouYi Street, Xi'an 710072, China.

Publication

Sensors (Basel). 2018 Jul 20;18(7):2359. doi: 10.3390/s18072359.

DOI: 10.3390/s18072359
PMID: 30036993
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6068628/
Abstract

Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target information or background information when experiencing rotation, out of view, and heavy occlusion. In order to reduce the computational complexity by creating a novel method to enhance tracking ability, we first introduce an adaptive dimensionality reduction technique to extract the features from the image, based on pre-trained VGG-Net. We then propose an adaptive model update to assign weights during an update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with the offline Siamese tracker to accomplish long term tracking. Experimental results demonstrate that the proposed tracker has satisfactory performance in a wide range of challenging tracking scenarios.

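The peak-to-sidelobe ratio (PSR) used by the abstract's adaptive model update is a standard confidence measure for correlation-filter trackers: a sharp, isolated peak in the response map means a confident match, while a flat or multi-modal response suggests occlusion or drift. A minimal sketch of the idea follows; the 11×11 exclusion window and the way PSR would gate updates are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR of a correlation response map.

    PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe is the
    map with a small window around the peak excluded (window half-size is an
    assumed parameter, not taken from the paper).
    """
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    # Mask out a (2*exclude+1)^2 window around the peak.
    mask = np.ones(response.shape, dtype=bool)
    r0, c0 = max(peak_idx[0] - exclude, 0), max(peak_idx[1] - exclude, 0)
    mask[r0:peak_idx[0] + exclude + 1, c0:peak_idx[1] + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```

In an adaptive update scheme of the kind the abstract describes, a high PSR frame would contribute a larger weight to the filter update, while a low PSR frame would be down-weighted or skipped, and a persistently low PSR could trigger re-detection by the offline Siamese branch.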

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/059e97d50d5a/sensors-18-02359-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/3498c4410c0f/sensors-18-02359-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/f8644d3ac4f9/sensors-18-02359-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/4499486cc870/sensors-18-02359-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/fa1971116b0c/sensors-18-02359-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/647c442ff5a5/sensors-18-02359-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/0ff22e5760b1/sensors-18-02359-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/739eced5bf1d/sensors-18-02359-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2dd6/6068628/f74505c379b4/sensors-18-02359-g008a.jpg

Similar Articles

1
Robust Visual Tracking Based on Adaptive Convolutional Features and Offline Siamese Tracker.
Sensors (Basel). 2018 Jul 20;18(7):2359. doi: 10.3390/s18072359.
2
Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature.
Sensors (Basel). 2018 Feb 23;18(2):653. doi: 10.3390/s18020653.
3
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Sensors (Basel). 2016 Apr 15;16(4):545. doi: 10.3390/s16040545.
4
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters.
Sensors (Basel). 2017 Feb 23;17(3):433. doi: 10.3390/s17030433.
5
Combined Kalman Filter and Multifeature Fusion Siamese Network for Real-Time Visual Tracking.
Sensors (Basel). 2019 May 13;19(9):2201. doi: 10.3390/s19092201.
6
SNS-CF: Siamese Network with Spatially Semantic Correlation Features for Object Tracking.
Sensors (Basel). 2020 Aug 28;20(17):4881. doi: 10.3390/s20174881.
7
SiamOT: An Improved Siamese Network with Online Training for Visual Tracking.
Sensors (Basel). 2022 Sep 1;22(17):6597. doi: 10.3390/s22176597.
8
Local Semantic Siamese Networks for Fast Tracking.
IEEE Trans Image Process. 2019 Dec 17. doi: 10.1109/TIP.2019.2959256.
9
SiamATL: Online Update of Siamese Tracking Network via Attentional Transfer Learning.
IEEE Trans Cybern. 2022 Aug;52(8):7527-7540. doi: 10.1109/TCYB.2020.3043520. Epub 2022 Jul 19.
10
Improving Object Tracking by Added Noise and Channel Attention.
Sensors (Basel). 2020 Jul 6;20(13):3780. doi: 10.3390/s20133780.

Cited By

1
Proposal-Based Visual Tracking Using Spatial Cascaded Transformed Region Proposal Network.
Sensors (Basel). 2020 Aug 26;20(17):4810. doi: 10.3390/s20174810.

References

1
Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods.
IEEE Trans Pattern Anal Mach Intell. 2018 Jan;40(1):48-62. doi: 10.1109/TPAMI.2017.2655048. Epub 2017 Jan 18.
2
Robust Visual Tracking via Convolutional Networks Without Training.
IEEE Trans Image Process. 2016 Apr;25(4):1779-92. doi: 10.1109/TIP.2016.2531283. Epub 2016 Feb 18.
3
DeepTrack: Learning Discriminative Feature Representations Online for Robust Visual Tracking.
IEEE Trans Image Process. 2016 Apr;25(4):1834-48. doi: 10.1109/TIP.2015.2510583. Epub 2015 Dec 22.
4
Visual Object Tracking Performance Measures Revisited.
IEEE Trans Image Process. 2016 Mar;25(3):1261-74. doi: 10.1109/TIP.2016.2520370.
5
Visual Tracking: An Experimental Survey.
IEEE Trans Pattern Anal Mach Intell. 2014 Jul;36(7):1442-68. doi: 10.1109/TPAMI.2013.230.
6
High-Speed Tracking with Kernelized Correlation Filters.
IEEE Trans Pattern Anal Mach Intell. 2015 Mar;37(3):583-96. doi: 10.1109/TPAMI.2014.2345390.
7
Object Tracking Benchmark.
IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1834-48. doi: 10.1109/TPAMI.2014.2388226.
8
Zero-Aliasing Correlation Filters for Object Recognition.
IEEE Trans Pattern Anal Mach Intell. 2015 Aug;37(8):1702-15. doi: 10.1109/TPAMI.2014.2375215.
9
Robust Online Learned Spatio-Temporal Context Model for Visual Tracking.
IEEE Trans Image Process. 2014 Feb;23(2):785-96. doi: 10.1109/TIP.2013.2293430.
10
Tracking-Learning-Detection.
IEEE Trans Pattern Anal Mach Intell. 2012 Jul;34(7):1409-22. doi: 10.1109/TPAMI.2011.239. Epub 2011 Dec 13.