Zheng Wei-Shi, Gong Shaogang, Xiang Tao
IEEE Trans Pattern Anal Mach Intell. 2016 Mar;38(3):591-606. doi: 10.1109/TPAMI.2015.2453984.
Solving the problem of matching people across non-overlapping multi-camera views, known as person re-identification (re-id), has received increasing interest in computer vision. In a real-world application scenario, a watch-list (gallery set) of a handful of known target people is provided, with very few images (shots) per target (in many cases only a single one). Existing re-id methods are largely unsuitable for this open-world re-id challenge because they are designed for (1) a closed-world scenario, where the gallery and probe sets are assumed to contain exactly the same people; (2) person-wise identification, whereby the model verifies a probe exhaustively against each individual in the gallery set; and (3) learning a matching model from multiple shots. In this paper, a novel transfer local relative distance comparison (t-LRDC) model is formulated to address the open-world person re-identification problem by one-shot group-based verification. The model is designed to mine and transfer useful information from a labelled open-world non-target dataset. Extensive experiments demonstrate that the proposed approach outperforms both non-transfer learning and existing transfer learning based re-id methods.
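To make the open-world setting concrete, the sketch below illustrates group-based verification in its simplest form: a probe is accepted only if it is close enough to *some* member of the watch-list, rather than being exhaustively identified against every individual. This is a minimal toy with Euclidean distance and random feature vectors; the function name, feature dimensionality, and threshold are illustrative assumptions, not the paper's t-LRDC model, which instead learns a transferred local relative distance comparison.

```python
import numpy as np

def group_verify(probe_feat, gallery_feats, threshold):
    """Accept the probe as a watch-list target if its nearest one-shot
    gallery exemplar lies within the distance threshold; reject otherwise.
    (Toy stand-in for a learned distance such as t-LRDC.)"""
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    nearest = int(np.argmin(dists))
    accepted = bool(dists[nearest] < threshold)
    return accepted, nearest

# Toy example: 3 one-shot gallery targets with 4-D features (illustrative only).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 4))
target_probe = gallery[1] + 0.05 * rng.normal(size=4)  # a slightly perturbed view of target 1
imposter_probe = gallery.mean(axis=0) + 5.0            # a non-target person, far from all exemplars

print(group_verify(target_probe, gallery, threshold=1.0))    # accepted, matched to target 1
print(group_verify(imposter_probe, gallery, threshold=1.0))  # rejected: open-world non-target
```

Note the rejection branch: unlike closed-world identification, the verifier must be able to say "none of the targets", which is exactly where a distance learned from labelled non-target data helps.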