

Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification.

Affiliations

College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China.

College of Engineering, Huaqiao University, Quanzhou 362021, China.

Publication info

Sensors (Basel). 2023 Jan 27;23(3):1426. doi: 10.3390/s23031426.

DOI: 10.3390/s23031426
PMID: 36772466
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9921303/
Abstract

Visible-infrared person re-identification (VIPR) has great potential for the intelligent transportation systems of smart cities, but it is challenging because of the huge modal discrepancy between visible and infrared images. Although visible and infrared data appear to form two domains, VIPR is not identical to domain adaptation, which massively eliminates domain discrepancies. Because VIPR has complete identity information in both the visible and infrared modalities, overemphasizing domain adaptation drains the discriminative appearance information in each domain. We therefore propose a novel margin-based modal adaptive learning (MMAL) method for VIPR. On each domain, we apply triplet and label-smoothing cross-entropy loss functions to learn appearance-discriminative features. Between the two domains, we design a simple yet effective marginal maximum mean discrepancy (M3D) loss function that avoids excessive suppression of modal discrepancies, protecting the discriminative ability of the features on each domain. As a result, our MMAL method learns modal-invariant yet appearance-discriminative features that improve VIPR. Experimental results show that MMAL achieves state-of-the-art VIPR performance; for example, on the RegDB dataset in the visible-to-infrared retrieval mode, it reaches 93.24% rank-1 accuracy and 83.77% mean average precision.
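The core idea of the M3D loss described above can be sketched numerically: estimate the squared maximum mean discrepancy (MMD) between a batch of visible features and a batch of infrared features, then hinge it at a margin so the modal gap is reduced but never driven all the way to zero. The NumPy sketch below is an illustration of that idea only; the kernel choice, bandwidth heuristic, margin value, and feature shapes are assumptions, not the paper's actual settings.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Pairwise Gaussian (RBF) kernel between rows of x and y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma):
    """Biased batch estimate of squared maximum mean discrepancy."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

def marginal_mmd_loss(vis, ir, margin, sigma):
    """Hinged MMD: penalize the modal discrepancy only when it exceeds
    the margin, so alignment does not erase all appearance cues."""
    return max(mmd2(vis, ir, sigma) - margin, 0.0)

# Illustrative features: 64 samples, 8-dim embeddings per modality.
rng = np.random.default_rng(0)
dim = 8
sigma = np.sqrt(dim)  # common bandwidth heuristic (assumption)
aligned_vis = rng.normal(0.0, 1.0, (64, dim))
aligned_ir = rng.normal(0.0, 1.0, (64, dim))
shifted_ir = rng.normal(3.0, 1.0, (64, dim))

# Well-aligned modalities fall inside the margin -> zero loss;
# a large modal gap is penalized.
print(marginal_mmd_loss(aligned_vis, aligned_ir, margin=0.1, sigma=sigma))
print(marginal_mmd_loss(aligned_vis, shifted_ir, margin=0.1, sigma=sigma))
```

In training, this hinged term would be added to the per-domain triplet and label-smoothing cross-entropy losses, so gradients stop pushing the two modalities together once their discrepancy is within the margin.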


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/3b6c694e8467/sensors-23-01426-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/f15ea10a43c5/sensors-23-01426-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/5e80f4ebc244/sensors-23-01426-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/e424cb1f555e/sensors-23-01426-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/07af83e2abfb/sensors-23-01426-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/6cd17b422101/sensors-23-01426-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/9aa95b29858c/sensors-23-01426-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/081a/9921303/1356bc1a0bdb/sensors-23-01426-g008.jpg

Similar Articles

1. Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification.
Sensors (Basel). 2023 Jan 27;23(3):1426. doi: 10.3390/s23031426.
2. Cross-modal group-relation optimization for visible-infrared person re-identification.
Neural Netw. 2024 Nov;179:106576. doi: 10.1016/j.neunet.2024.106576. Epub 2024 Jul 31.
3. Graph Sampling-Based Multi-Stream Enhancement Network for Visible-Infrared Person Re-Identification.
Sensors (Basel). 2023 Sep 18;23(18):7948. doi: 10.3390/s23187948.
4. Joint Modal Alignment and Feature Enhancement for Visible-Infrared Person Re-Identification.
Sensors (Basel). 2023 May 23;23(11):4988. doi: 10.3390/s23114988.
5. Flexible Body Partition-Based Adversarial Learning for Visible Infrared Person Re-Identification.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4676-4687. doi: 10.1109/TNNLS.2021.3059713. Epub 2022 Aug 31.
6. Channel semantic mutual learning for visible-thermal person re-identification.
PLoS One. 2024 Jan 19;19(1):e0293498. doi: 10.1371/journal.pone.0293498. eCollection 2024.
7. CycleTrans: Learning Neutral Yet Discriminative Features via Cycle Construction for Visible-Infrared Person Re-Identification.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5469-5479. doi: 10.1109/TNNLS.2024.3382937. Epub 2025 Feb 28.
8. Visible-Infrared Person Re-Identification With Modality-Specific Memory Network.
IEEE Trans Image Process. 2022;31:7165-7178. doi: 10.1109/TIP.2022.3220408. Epub 2022 Nov 16.
9. Structure-Aware Positional Transformer for Visible-Infrared Person Re-Identification.
IEEE Trans Image Process. 2022;31:2352-2364. doi: 10.1109/TIP.2022.3141868. Epub 2022 Mar 15.
10. Global-Local Multiple Granularity Learning for Cross-Modality Visible-Infrared Person Reidentification.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4209-4219. doi: 10.1109/TNNLS.2021.3085978. Epub 2025 Feb 28.

Cited By

1. Graph Sampling-Based Multi-Stream Enhancement Network for Visible-Infrared Person Re-Identification.
Sensors (Basel). 2023 Sep 18;23(18):7948. doi: 10.3390/s23187948.
2. Cross-Modality Person Re-Identification via Local Paired Graph Attention Network.
Sensors (Basel). 2023 Apr 15;23(8):4011. doi: 10.3390/s23084011.

References

1. Structure-Aware Positional Transformer for Visible-Infrared Person Re-Identification.
IEEE Trans Image Process. 2022;31:2352-2364. doi: 10.1109/TIP.2022.3141868. Epub 2022 Mar 15.
2. Cross-Domain Graph Convolutions for Adversarial Unsupervised Domain Adaptation.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):3847-3858. doi: 10.1109/TNNLS.2021.3122899. Epub 2023 Aug 4.
3. SFANet: A Spectrum-Aware Feature Augmentation Network for Visible-Infrared Person Reidentification.
IEEE Trans Neural Netw Learn Syst. 2023 Apr;34(4):1958-1971. doi: 10.1109/TNNLS.2021.3105702. Epub 2023 Apr 4.
4. Hierarchical Connectivity-Centered Clustering for Unsupervised Domain Adaptation on Person Re-Identification.
IEEE Trans Image Process. 2021;30:6715-6729. doi: 10.1109/TIP.2021.3094140. Epub 2021 Jul 26.
5. Global-Local Multiple Granularity Learning for Cross-Modality Visible-Infrared Person Reidentification.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4209-4219. doi: 10.1109/TNNLS.2021.3085978. Epub 2025 Feb 28.
6. Flexible Body Partition-Based Adversarial Learning for Visible Infrared Person Re-Identification.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4676-4687. doi: 10.1109/TNNLS.2021.3059713. Epub 2022 Aug 31.
7. Deep Learning for Person Re-Identification: A Survey and Outlook.
IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2872-2893. doi: 10.1109/TPAMI.2021.3054775. Epub 2022 May 5.
8. Progressive Modality Cooperation for Multi-Modality Domain Adaptation.
IEEE Trans Image Process. 2021;30:3293-3306. doi: 10.1109/TIP.2021.3052083. Epub 2021 Mar 3.
9. Contrastive Adaptation Network for Single- and Multi-Source Domain Adaptation.
IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1793-1804. doi: 10.1109/TPAMI.2020.3029948. Epub 2022 Mar 4.
10. PDAM: A Panoptic-Level Feature Alignment Framework for Unsupervised Domain Adaptive Instance Segmentation in Microscopy Images.
IEEE Trans Med Imaging. 2021 Jan;40(1):154-165. doi: 10.1109/TMI.2020.3023466. Epub 2020 Dec 29.