

Contrastive learning enhanced pseudo-labeling for unsupervised domain adaptation in person re-identification

Authors

Bai Xuemei, Zhang Yuqing, Zhang Chenjie, Wang Zhijun

Affiliations

School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, Jilin, China.

High Performance Computing Center, Changchun Normal University, Changchun, Jilin, China.

Publication

PLoS One. 2025 Jul 14;20(7):e0328131. doi: 10.1371/journal.pone.0328131. eCollection 2025.

DOI: 10.1371/journal.pone.0328131
PMID: 40658748
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12258581/
Abstract

Person re-identification (ReID) technology has many applications in intelligent surveillance and public safety. However, the domain gap between source and target domains makes model generalization extremely challenging. To reduce the dependence on labeled data, Unsupervised Domain Adaptation (UDA) methods have become an effective way to address this problem. In existing UDA methods, however, noise in the generated pseudo-labels still significantly affects model training, limiting performance on the target domain. This paper therefore proposes a contrastive-learning-based pseudo-label refinement method with probabilistic uncertainty for unsupervised domain-adaptive person re-identification. We first enhance the feature representations of target-domain samples with contrastive learning to improve their discriminability in the feature space, thereby strengthening the model's cross-domain transfer performance. We then propose a novel loss function that refines the pseudo-label generation process to reduce the interference of label noise during training, mitigating the negative impact of inaccurate pseudo-labels on the model. In experiments on two large-scale public datasets, Market1501 and DukeMTMC, the proposed method reaches Rank-1 accuracies of 91.4% and 81.4%, with mean average precision (mAP) of 79.0% and 67.9%, respectively, demonstrating that it provides an effective solution for the person re-identification task, with effective handling of label noise and improved model generalization.
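The pipeline the abstract describes — clustering unlabeled target-domain features into pseudo-labels, down-weighting low-confidence assignments, and pulling matched features together with a contrastive objective — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the k-means clustering, softmax-based confidence weight, and InfoNCE-style loss are standard stand-ins for the paper's components, and all function names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Cluster target-domain features; cluster indices act as pseudo-labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def pseudo_label_confidence(X, labels, centers, temperature=1.0):
    """Soft confidence per sample: softmax over negative center distances.
    Samples near their assigned center get weight near 1; ambiguous samples
    (nearly equidistant between centers) are down-weighted during training."""
    dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
    probs = np.exp(-dists / temperature)
    probs /= probs.sum(1, keepdims=True)
    return probs[np.arange(len(X)), labels]

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on L2-normalized feature vectors:
    low when anchor matches the positive, high when it matches a negative."""
    logits = np.concatenate([[anchor @ positive], negatives @ anchor]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

A noise-aware training step would then scale each sample's loss by its pseudo-label confidence, so that uncertain cluster assignments contribute less gradient — one plausible reading of how the paper's refinement loss suppresses label noise.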


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/1d05444b01db/pone.0328131.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/58e678af7630/pone.0328131.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/fc4ff47ceacf/pone.0328131.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/a7f20aae924e/pone.0328131.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/41555214fcbd/pone.0328131.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/c4c2085de657/pone.0328131.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/cf38891bf097/pone.0328131.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/665f3e0d90a8/pone.0328131.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/ea79c5c1a840/pone.0328131.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/66ff75eaaaff/pone.0328131.g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/e0ddde57bf32/pone.0328131.g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/fc5c613c2c46/pone.0328131.g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/394b2cc3ded8/pone.0328131.g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/895af4ca8d20/pone.0328131.g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ede6/12258581/21876b50af30/pone.0328131.g015.jpg

Similar Articles

1. Contrastive learning enhanced pseudo-labeling for unsupervised domain adaptation in person re-identification. PLoS One. 2025 Jul 14;20(7):e0328131. doi: 10.1371/journal.pone.0328131. eCollection 2025.
2. Short-Term Memory Impairment.
3. Unsupervised cross-modality domain adaptation via source-domain labels guided contrastive learning for medical image segmentation. Med Biol Eng Comput. 2025 Feb 13. doi: 10.1007/s11517-025-03312-2.
4. Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights. Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
5. A Weight-Aware-Based Multisource Unsupervised Domain Adaptation Method for Human Motion Intention Recognition. IEEE Trans Cybern. 2025 Jul;55(7):3131-3143. doi: 10.1109/TCYB.2025.3565754.
6. A confidence-guided unsupervised domain adaptation network with pseudo-labeling and deformable CNN-transformer for medical image segmentation. Neural Netw. 2025 Nov;191:107844. doi: 10.1016/j.neunet.2025.107844. Epub 2025 Jul 8.
7. Unsupervised heterogeneous domain adaptation for EEG classification. J Neural Eng. 2024 Jul 16;21(4). doi: 10.1088/1741-2552/ad5fbd.
8. Unsupervised domain adaptation multi-level adversarial learning-based crossing-domain retinal vessel segmentation. Comput Biol Med. 2024 Aug;178:108759. doi: 10.1016/j.compbiomed.2024.108759. Epub 2024 Jun 24.
9. Sexual Harassment and Prevention Training.
10. Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices. Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.
