Lu Hai, Luo Enbo, Feng Yong, Wang Yifan
Electric Power Research Institute of Yunnan Power Grid Co., Ltd., Kunming 650217, China.
Math Biosci Eng. 2024 Jul 23;21(7):6694-6709. doi: 10.3934/mbe.2024293.
In recent years, significant progress has been made in video-based person re-identification (Re-ID). The key challenge in video person Re-ID lies in effectively constructing discriminative and robust person feature representations. Methods based on local regions use spatial and temporal attention to extract representative local features, but prior approaches often overlook the correlations between local regions. To exploit the relationships among different local regions, we propose a novel video person Re-ID representation learning approach based on a graph transformer, which enables contextual interaction between related region features. Specifically, we construct a local relation graph whose nodes represent local regions and whose edges model their intrinsic relationships. The graph adopts a transformer architecture for feature propagation, iteratively refining each region feature with information from adjacent nodes to obtain part-level feature representations. To learn compact and discriminative representations, we further propose a global feature learning branch based on a vision transformer that captures the relationships between different frames in a sequence. In addition, we design a dual-branch interaction network based on multi-head fusion attention to integrate the frame-level features extracted by the local and global branches. Finally, the concatenation of the interacted global and local features is used for testing. We evaluated the proposed method on three datasets: iLIDS-VID, MARS, and DukeMTMC-VideoReID. Experimental results demonstrate competitive performance and validate the effectiveness of the proposed approach.
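To make the local relation graph concrete, the following is a minimal sketch of one round of transformer-style feature propagation over region nodes, where each region attends only to regions marked as related in an adjacency matrix. The module name, dimensions, and the use of horizontally pooled body regions as nodes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalRelationGraphLayer(nn.Module):
    """One propagation step over region nodes (illustrative assumption).

    Each node (e.g., a pooled body region) attends to its neighbours;
    the adjacency matrix restricts which regions count as related.
    """
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 2), nn.ReLU(), nn.Linear(dim * 2, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (B, R, D) region features; adj: (R, R), 1 = related (self-loops included)
        mask = adj == 0                              # True entries are blocked in attention
        out, _ = self.attn(nodes, nodes, nodes, attn_mask=mask)
        nodes = self.norm1(nodes + out)              # residual + norm, transformer style
        nodes = self.norm2(nodes + self.ffn(nodes))  # position-wise feed-forward refinement
        return nodes
```

Stacking several such layers corresponds to the iterative refinement described above, with each iteration mixing in more context from adjacent region nodes.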
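The dual-branch interaction can likewise be sketched as symmetric cross-attention between the frame-level features of the local and global branches. The exact fusion layout (which branch queries which, residual connections) is an assumption based on the abstract's description of multi-head fusion attention.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Hypothetical multi-head fusion attention between the two branches."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # local features query the global branch and vice versa (assumed layout)
        self.local_from_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_from_local = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f_local: torch.Tensor, f_global: torch.Tensor):
        # f_local, f_global: (B, T, D) frame-level features from each branch
        l, _ = self.local_from_global(f_local, f_global, f_global)
        g, _ = self.global_from_local(f_global, f_local, f_local)
        return f_local + l, f_global + g   # residual interaction keeps each branch's identity
```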
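Finally, an illustrative sketch of how the interacted features could be pooled and concatenated into a single clip descriptor for testing; temporal average pooling and cosine-similarity ranking are common choices in video Re-ID and are assumptions here rather than the paper's stated protocol.

```python
import torch
import torch.nn.functional as F

def clip_descriptor(f_local: torch.Tensor, f_global: torch.Tensor) -> torch.Tensor:
    # f_local, f_global: (B, T, D) frame-level features after interaction
    v = torch.cat([f_local.mean(dim=1), f_global.mean(dim=1)], dim=-1)  # (B, 2D)
    return F.normalize(v, dim=-1)  # L2-normalise so dot product equals cosine similarity

def rank(query: torch.Tensor, gallery: torch.Tensor) -> torch.Tensor:
    # query: (2D,) descriptor; gallery: (N, 2D) descriptors -> (N,) similarity scores
    return gallery @ query
```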