Gao Junyu, Zhang Tianzhu, Yang Xiaoshan, Xu Changsheng
IEEE Trans Image Process. 2017 Apr;26(4):1845-1858. doi: 10.1109/TIP.2017.2656628. Epub 2017 Jan 20.
Most existing tracking methods are direct trackers, which directly exploit foreground and/or background information for object appearance modeling and decide whether an image patch is the target object or not. As a result, these trackers cannot perform well when the target appearance changes drastically and deviates from its model. To deal with this issue, we propose a novel relative tracker, which effectively exploits the relative relationships among image patches from both foreground and background for object appearance modeling. Unlike direct trackers, the proposed relative tracker robustly localizes the target object by selecting the image patch with the highest relative score with respect to the target appearance model. To model the relative relationships among large-scale image patch pairs, we propose a novel and effective deep relative learning algorithm based on a Convolutional Neural Network. We test the proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our method consistently outperforms state-of-the-art trackers due to the powerful capacity of the proposed deep relative model.
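A minimal sketch of the relative-scoring idea described in the abstract, assuming a Siamese-style CNN that scores (candidate patch, template patch) pairs and localizes the target as the candidate with the highest relative score. The architecture, layer sizes, and all names (RelativeScoreNet, localize) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's network): a Siamese-style CNN that
# assigns a "relative score" to (candidate patch, template patch) pairs.
# Layer sizes and names are assumptions chosen for demonstration only.
import torch
import torch.nn as nn

class RelativeScoreNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional branch applied to both patches (RGB, arbitrary size).
        self.branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head mapping the concatenated pair embedding to a single relative score.
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, candidate, template):
        # candidate, template: (N, 3, H, W) batches of image patches.
        f_c = self.branch(candidate)
        f_t = self.branch(template)
        return self.head(torch.cat([f_c, f_t], dim=1)).squeeze(1)

def localize(net, candidates, template):
    """Return the index of the candidate patch with the highest relative score."""
    with torch.no_grad():
        scores = net(candidates, template.expand(candidates.size(0), -1, -1, -1))
    return int(scores.argmax())
```

In this reading, tracking reduces to sampling candidate patches around the previous target location and keeping the one that scores highest against the current appearance template, rather than thresholding each patch independently as a direct tracker would.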