


RBDF: Reciprocal Bidirectional Framework for Visible Infrared Person Reidentification.

Publication Information

IEEE Trans Cybern. 2022 Oct;52(10):10988-10998. doi: 10.1109/TCYB.2022.3183395. Epub 2022 Sep 19.

DOI: 10.1109/TCYB.2022.3183395
PMID: 35834459
Abstract

Visible infrared person reidentification (VI-REID) plays a critical role in night-time surveillance applications. Most methods attempt to reduce the cross-modality gap by extracting the modality-shared features. However, they neglect the distinct image-level discrepancies among heterogeneous pedestrian images. In this article, we propose a reciprocal bidirectional framework (RBDF) to achieve modality unification before discriminative feature learning. The bidirectional image translation subnetworks can learn two opposite mappings between visible and infrared modality. Particularly, we investigate the characteristics of the latent space and design a novel associated loss to pull close the distribution between the intermediate representations of two mappings. Mutual interaction between two opposite mappings helps the network generate heterogeneous images that have high similarity with the real images. Hence, the concatenation of original and generated images can eliminate the modality gap. During the feature learning procedure, the attention mechanism-based feature embedding network can learn more discriminative representations with the identity classification and feature metric learning. Experimental results indicate that our method achieves state-of-the-art performance. For instance, we achieve 54.41% mAP and 57.66% rank-1 accuracy on SYSU-MM01 dataset, outperforming the existing works by a large margin.
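The abstract does not give the exact form of the "associated loss" that pulls together the latent distributions of the two opposite mappings. As a rough illustration only, one plausible instantiation (an assumption, not the paper's definition) is a moment-matching penalty between the intermediate representations produced by the V→I and I→V translation subnetworks:

```python
import numpy as np

def associated_loss(z_v2i, z_i2v):
    # Hypothetical moment-matching sketch of an "associated loss":
    # penalize the distance between the batch means and variances of the
    # latent codes from the two opposite translation mappings, so their
    # intermediate distributions are pulled close together.
    mu_a, mu_b = z_v2i.mean(axis=0), z_i2v.mean(axis=0)
    var_a, var_b = z_v2i.var(axis=0), z_i2v.var(axis=0)
    return np.sum((mu_a - mu_b) ** 2) + np.sum((var_a - var_b) ** 2)

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))   # latent codes from the V->I subnetwork
z_b = z_a.copy()                 # identical distribution
print(associated_loss(z_a, z_b)) # → 0.0
```

In the full method this term would be minimized jointly with the translation objectives, and the generated cross-modality images would then be concatenated with the originals before the attention-based feature embedding stage.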


Similar Articles

1. RBDF: Reciprocal Bidirectional Framework for Visible Infrared Person Reidentification.
IEEE Trans Cybern. 2022 Oct;52(10):10988-10998. doi: 10.1109/TCYB.2022.3183395. Epub 2022 Sep 19.

2. Flexible Body Partition-Based Adversarial Learning for Visible Infrared Person Re-Identification.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4676-4687. doi: 10.1109/TNNLS.2021.3059713. Epub 2022 Aug 31.

3. SFANet: A Spectrum-Aware Feature Augmentation Network for Visible-Infrared Person Reidentification.
IEEE Trans Neural Netw Learn Syst. 2023 Apr;34(4):1958-1971. doi: 10.1109/TNNLS.2021.3105702. Epub 2023 Apr 4.

4. Global-Local Multiple Granularity Learning for Cross-Modality Visible-Infrared Person Reidentification.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4209-4219. doi: 10.1109/TNNLS.2021.3085978. Epub 2025 Feb 28.

5. Visible-Infrared Person Re-Identification With Modality-Specific Memory Network.
IEEE Trans Image Process. 2022;31:7165-7178. doi: 10.1109/TIP.2022.3220408. Epub 2022 Nov 16.

6. Structure-Aware Positional Transformer for Visible-Infrared Person Re-Identification.
IEEE Trans Image Process. 2022;31:2352-2364. doi: 10.1109/TIP.2022.3141868. Epub 2022 Mar 15.

7. Dually Distribution Pulling Network for Cross-Resolution Person Reidentification.
IEEE Trans Cybern. 2022 Nov;52(11):12016-12027. doi: 10.1109/TCYB.2021.3077500. Epub 2022 Oct 17.

8. Channel semantic mutual learning for visible-thermal person re-identification.
PLoS One. 2024 Jan 19;19(1):e0293498. doi: 10.1371/journal.pone.0293498. eCollection 2024.

9. Cross-Modality Person Re-Identification via Modality-aware Collaborative Ensemble Learning.
IEEE Trans Image Process. 2020 Jun 3;PP. doi: 10.1109/TIP.2020.2998275.

10. Bi-Directional Exponential Angular Triplet Loss for RGB-Infrared Person Re-Identification.
IEEE Trans Image Process. 2021;30:1583-1595. doi: 10.1109/TIP.2020.3045261. Epub 2021 Jan 11.

Cited By

1. Visible-infrared person re-identification with region-based augmentation and cross modality attention.
Sci Rep. 2025 May 25;15(1):18225. doi: 10.1038/s41598-025-01979-z.