Liu Zhaoshuo, Feng Chaolu, Chen Shuaizheng, Hu Jun
School of Computer Science and Technology, Northeastern University, Shenyang, 110819, Liaoning, China.
School of Computer Science and Technology, Northeastern University, Shenyang, 110819, Liaoning, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, 110819, Liaoning, China.
Neural Netw. 2023 Apr;161:105-115. doi: 10.1016/j.neunet.2023.01.033. Epub 2023 Feb 1.
Person re-identification (ReID), considered a sub-problem of image retrieval, is critical for intelligent security. The general practice is to train a deep model on images from a particular scenario (also known as a domain) and perform retrieval tests on images from the same domain. The model therefore has to be retrained to ensure good performance on unseen domains. Unfortunately, retraining introduces the so-called catastrophic forgetting problem inherent in deep learning models. To address this problem, we propose a Continual person re-identification model with a Knowledge-Preserving (CKP) mechanism. The proposed model is able to accumulate knowledge from continuously changing scenarios. As the scenario changes, the knowledge is updated via a graph attention network from a human-cognition-inspired perspective. The accumulated knowledge is then used to guide the learning process of the proposed model on image samples from newly arriving domains. We evaluate CKP against fine-tuning, continual learning methods for image classification and person re-identification, and joint training. Experiments on representative benchmark datasets (Market1501, DukeMTMC, CUHK03, CUHK-SYSU, and MSMT17, which arrive in different orders) demonstrate the advantages of the proposed model in preventing forgetting, and experiments on other benchmark datasets (GRID, SenseReID, CUHK01, CUHK02, VIPER, iLIDS, and PRID, which are not available during training) demonstrate its generalization ability. CKP outperforms the best competing model by 0.58% and 0.65% on seen domains (datasets available during training), and by 0.95% and 1.02% on never-seen domains (datasets not available during training), in terms of mAP and Rank-1, respectively. The arrival order of the training datasets, the guidance of accumulated knowledge in learning new knowledge, and parameter settings are also discussed.
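The abstract names two mechanisms without implementation detail: a graph attention network that updates the accumulated knowledge as scenarios change, and a guidance signal that steers learning on newly arriving domains. The sketch below is a minimal, hypothetical illustration of those two ideas only, not the authors' code; all names (KnowledgeGraphAttention, guidance_loss, feat_dim, num_prototypes) and the distillation-style guidance loss are assumptions for exposition.

```python
# Minimal sketch (assumed, not the paper's implementation) of:
# (1) a single graph-attention update over accumulated knowledge prototypes, and
# (2) a guidance loss aligning new-domain features with that knowledge.

import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeGraphAttention(nn.Module):
    """Single-head graph attention over a fully connected graph of knowledge prototypes."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim, bias=False)
        # attention score a([Wh_i || Wh_j]) as in the standard GAT formulation
        self.attn = nn.Linear(2 * feat_dim, 1, bias=False)
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, prototypes: torch.Tensor) -> torch.Tensor:
        # prototypes: (N, D) accumulated knowledge vectors
        h = self.proj(prototypes)                           # (N, D)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)                # (N, N, D)
        hj = h.unsqueeze(0).expand(n, n, -1)                # (N, N, D)
        e = self.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = F.softmax(e, dim=-1)                        # attention weights over neighbours
        return alpha @ h                                    # updated prototypes, (N, D)


def guidance_loss(new_feats: torch.Tensor, knowledge: torch.Tensor) -> torch.Tensor:
    """Hypothetical guidance term: pull each new-domain feature toward its closest prototype."""
    sim = F.normalize(new_feats, dim=-1) @ F.normalize(knowledge, dim=-1).t()  # (B, N)
    return -sim.max(dim=-1).values.mean()


if __name__ == "__main__":
    feat_dim, num_prototypes, batch = 256, 32, 16
    gat = KnowledgeGraphAttention(feat_dim)
    knowledge = torch.randn(num_prototypes, feat_dim)       # stand-in for accumulated knowledge
    new_feats = torch.randn(batch, feat_dim)                # stand-in for new-domain features

    updated = gat(knowledge)                                # knowledge update as the scenario changes
    loss = guidance_loss(new_feats, updated.detach())       # guidance for learning the new domain
    print(updated.shape, loss.item())
```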