School of Computer Science, Wuhan University, Wuhan 430072, China.
CSIRO's Data61, Sydney, NSW 2122, Australia.
Neural Netw. 2022 Apr;148:206-218. doi: 10.1016/j.neunet.2022.01.010. Epub 2022 Jan 21.
Object tracking with Siamese networks has gained popularity for its outstanding performance and considerable potential. However, most existing Siamese architectures struggle in scenes where the target undergoes dramatic shape or environmental changes. In this work, we propose a novel and concise generative adversarial learning method to address this problem, especially when the target undergoes drastic appearance changes, illumination variations, and background clutter. We regard these situations as distractors for tracking and incorporate a distractor generator into the traditional Siamese network. This component simulates the distractors, and more robust tracking is achieved by eliminating them from the input instance search image. In addition, we use the generalized intersection over union (GIoU) as our training loss. Compared with the traditional IoU, GIoU is a stricter metric for bounding box regression, and using it as the training loss leads to more accurate tracking results. Experiments on five challenging benchmarks show favorable, state-of-the-art results against other trackers in different aspects.
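To illustrate the GIoU loss mentioned above, the following is a minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) format. The box format and the helper name `giou_loss` are assumptions for illustration; the paper's actual training pipeline is not described at this level of detail.

```python
def giou_loss(pred, target):
    """Return 1 - GIoU for two boxes given as (x1, y1, x2, y2). Illustrative sketch only."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Areas of the predicted and target boxes.
    area_p = max(0.0, px2 - px1) * max(0.0, py2 - py1)
    area_t = max(0.0, tx2 - tx1) * max(0.0, ty2 - ty1)

    # Intersection area.
    ix1, iy1 = max(px1, tx1), max(py1, ty1)
    ix2, iy2 = min(px2, tx2), min(py2, ty2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    union = area_p + area_t - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest enclosing box C around both boxes.
    cx1, cy1 = min(px1, tx1), min(py1, ty1)
    cx2, cy2 = max(px2, tx2), max(py2, ty2)
    area_c = (cx2 - cx1) * (cy2 - cy1)

    # GIoU = IoU - |C \ (A ∪ B)| / |C|: it penalizes empty space in C,
    # so non-overlapping boxes still receive a useful training signal.
    giou = iou - (area_c - union) / area_c if area_c > 0 else iou
    return 1.0 - giou


# Example: a prediction that misses the target entirely yields GIoU < 0
# (loss > 1), unlike plain 1 - IoU, which saturates at 1.
print(giou_loss((0, 0, 2, 2), (3, 3, 5, 5)))  # ≈ 1.68
```

This is what makes GIoU a stricter regression target than IoU: the enclosing-box term distinguishes between predictions with the same (possibly zero) overlap but different distances from the ground truth.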