Yang Yang, Wang Guan'an, Tiwari Prayag, Pandey Hari Mohan, Lei Zhen
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4220-4232. doi: 10.1109/TNNLS.2021.3128269. Epub 2025 Feb 28.
Recently, unsupervised cross-dataset person reidentification (Re-ID), which aims to transfer knowledge from a labeled source domain to an unlabeled target domain, has attracted increasing attention. There are two common frameworks: one is pixel alignment, which transfers low-level knowledge, and the other is feature alignment, which transfers high-level knowledge. In this article, we propose a novel recurrent autoencoder (RAE) framework to unify these two kinds of methods and inherit their merits. Specifically, the proposed RAE includes three modules, i.e., a feature-transfer (FT) module, a pixel-transfer (PT) module, and a fusion module. The FT module utilizes an encoder to map source and target images to a shared feature space. In this space, not only are the features identity-discriminative, but the gap between source and target features is also reduced. The PT module uses a decoder to reconstruct the original images from their features. Here, we hope that the images reconstructed from target features are in the source style. Thus, the low-level knowledge can be propagated to the target domain. After transferring both high- and low-level knowledge with the two modules above, we design a bilinear pooling layer to fuse both kinds of knowledge. Extensive experiments on the Market-1501, DukeMTMC-ReID, and MSMT17 datasets show that our method significantly outperforms both pixel-alignment and feature-alignment Re-ID methods and achieves new state-of-the-art results.
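The data flow described above — encode into a shared feature space, decode back to pixel space, then fuse the two representations with bilinear pooling — can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions; the linear maps, names (`encode`, `decode`, `bilinear_pool`), and the re-encoding of the reconstruction are illustrative stand-ins, not the paper's actual networks or training losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only
img_dim, feat_dim = 64, 8

# Stand-ins for the learned encoder (FT module) and decoder (PT module)
W_enc = rng.standard_normal((feat_dim, img_dim)) * 0.1
W_dec = rng.standard_normal((img_dim, feat_dim)) * 0.1

def encode(x):
    """FT module sketch: map an image (source or target) into the shared feature space."""
    return W_enc @ x

def decode(f):
    """PT module sketch: reconstruct an image from its feature.

    In the actual method, training objectives (not shown here) would push
    reconstructions of target features toward the source style.
    """
    return W_dec @ f

def bilinear_pool(f1, f2):
    """Fusion module sketch: bilinear pooling as a flattened outer product."""
    return np.outer(f1, f2).ravel()

x_target = rng.standard_normal(img_dim)   # an unlabeled target-domain image (flattened)
f = encode(x_target)                      # high-level (feature-aligned) knowledge
x_rec = decode(f)                         # low-level (pixel-aligned) reconstruction
f_rec = encode(x_rec)                     # re-encode the reconstructed image
fused = bilinear_pool(f, f_rec)           # fuse both kinds of knowledge

print(f.shape, x_rec.shape, fused.shape)
```

The bilinear pooling step produces a `feat_dim * feat_dim` vector that captures pairwise interactions between the two feature views, which is one standard way such a fusion layer combines complementary representations.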