Part-Based Deep Hashing for Large-Scale Person Re-Identification.

Publication Information

IEEE Trans Image Process. 2017 Oct;26(10):4806-4817. doi: 10.1109/TIP.2017.2695101. Epub 2017 Apr 18.

Abstract

Large-scale datasets are a growing trend in person re-identification (re-id), making real-time search in a large gallery important. While previous methods mostly focus on discriminative learning, this paper attempts to integrate deep learning and hashing into one framework and to evaluate the efficiency and accuracy of this combination for large-scale person re-id. We incorporate spatial information into the discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-Based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) with a different identity. A triplet loss function enforces the constraint that the Hamming distance between pedestrian images (or parts) of the same identity is smaller than the distance between images (or parts) of different identities. Experiments show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.
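To make the triplet constraint concrete, the following is a minimal PyTorch sketch of a triplet loss over hash codes, assuming the common tanh relaxation used when training binary codes. The abstract does not give the architecture or optimization details, so the function names, margin value, and 128-bit code length below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hash_triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss over relaxed hash codes.

    `anchor` and `positive` share an identity; `negative` does not.
    Inputs are real-valued tanh outputs in [-1, 1] that approximate
    binary codes during training, since the true Hamming distance is
    non-differentiable.
    """
    # For codes in {-1, +1}^k, Hamming distance is a quarter of the
    # squared Euclidean distance: d_H(a, b) = ||a - b||^2 / 4.
    d_pos = (anchor - positive).pow(2).sum(dim=1) / 4.0
    d_neg = (anchor - negative).pow(2).sum(dim=1) / 4.0
    # Enforce that same-identity pairs sit closer in Hamming space:
    # d_pos + margin < d_neg, hinged at zero.
    return F.relu(d_pos - d_neg + margin).mean()

def binarize(codes):
    """Quantize relaxed codes to {-1, +1} for retrieval."""
    return torch.sign(codes)

# Toy batch: 8 triplets of 128-bit codes (sizes are hypothetical).
anchor = torch.tanh(torch.randn(8, 128))
positive = torch.tanh(torch.randn(8, 128))
negative = torch.tanh(torch.randn(8, 128))
print(hash_triplet_loss(anchor, positive, negative).item())
```

In the part-based setting described above, a loss of this form would be applied to each horizontal part's code, with the binarized part codes then combined (e.g., by concatenation, an assumption here) into the final pedestrian-level hash code used for retrieval.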

