Li Yuankun, Xu Tingfa, Deng Honggao, Shi Guokai, Guo Jie
School of Optics and Photonics, Image Engineering & Video Technology Lab, Beijing Institute of Technology, Beijing 100081, China.
Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education of China, Beijing 100081, China.
Sensors (Basel). 2018 Feb 23;18(2):653. doi: 10.3390/s18020653.
Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, some problems remain unsolved. When the target object undergoes long-term occlusion or scale variation, the correlation model used in existing CF-based algorithms inevitably learns non-target or partial-target information. To avoid model contamination and enhance the adaptability of model updating, we introduce a keypoint-matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker achieves satisfactory performance in a wide range of challenging tracking scenarios.
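The adaptive model update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear-interpolation update rule is the convention used by most CF trackers, while the base learning rate, score threshold, and function names are assumptions chosen for the example.

```python
import numpy as np

def dynamic_learning_rate(match_score, base_lr=0.02, threshold=0.5):
    """Scale the CF model learning rate by the keypoint matching score.

    A low matching score (suggesting occlusion or drift) suppresses the
    update so the model does not absorb non-target information.
    base_lr and threshold are illustrative values, not the paper's.
    """
    if match_score < threshold:
        return 0.0  # freeze the model during heavy occlusion
    return base_lr * match_score

def update_model(model, new_observation, match_score):
    """Linear-interpolation model update common to CF trackers."""
    lr = dynamic_learning_rate(match_score)
    return (1.0 - lr) * model + lr * new_observation

# A confident match blends in the new observation; a poor match
# leaves the model untouched, avoiding contamination.
model = np.zeros((4, 4))
obs = np.ones((4, 4))
updated = update_model(model, obs, match_score=0.9)
frozen = update_model(model, obs, match_score=0.2)
```

With a high matching score the model moves slightly toward the new observation; below the threshold the learning rate drops to zero and the model is preserved unchanged, which is the behavior the abstract attributes to the keypoint-matching strategy.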