

Style Uncertainty Based Self-Paced Meta Learning for Generalizable Person Re-Identification.

Publication

IEEE Trans Image Process. 2023;32:2107-2119. doi: 10.1109/TIP.2023.3263112.

DOI: 10.1109/TIP.2023.3263112
PMID: 37023142
Abstract

Domain generalizable person re-identification (DG ReID) is a challenging problem, because the trained model is often not generalizable to unseen target domains with different distribution from the source training domains. Data augmentation has been verified to be beneficial for better exploiting the source data to improve the model generalization. However, existing approaches primarily rely on pixel-level image generation that requires designing and training an extra generation network, which is extremely complex and provides limited diversity of augmented data. In this paper, we propose a simple yet effective feature based augmentation technique, named Style-uncertainty Augmentation (SuA). The main idea of SuA is to randomize the style of training data by perturbing the instance style with Gaussian noise during training process to increase the training domain diversity. And to better generalize knowledge across these augmented domains, we propose a progressive learning to learn strategy named Self-paced Meta Learning (SpML) that extends the conventional one-stage meta learning to multi-stage training process. The rationality is to gradually improve the model generalization ability to unseen target domains by simulating the mechanism of human learning. Furthermore, conventional person Re-ID loss functions are unable to leverage the valuable domain information to improve the model generalization. So we further propose a distance-graph alignment loss that aligns the feature relationship distribution among domains to facilitate the network to explore domain-invariant representations of images. Extensive experiments on four large-scale benchmarks demonstrate that our SuA-SpML achieves state-of-the-art generalization to unseen domains for person ReID.
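The core idea of Style-uncertainty Augmentation — randomizing instance style by perturbing feature statistics with Gaussian noise — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the noise scale `alpha`, and the use of plain NumPy arrays in place of network feature maps are all assumptions, and the instance mean/std are treated as the "style" statistics in the spirit of instance normalization.

```python
import numpy as np

def style_uncertainty_augment(x, alpha=0.1, rng=None):
    """Illustrative sketch: perturb per-instance channel statistics
    (the 'style') of a feature map x of shape (N, C, H, W) with
    Gaussian noise, then re-apply them to the normalized content."""
    if rng is None:
        rng = np.random.default_rng()
    # Per-instance, per-channel statistics over spatial dimensions.
    mu = x.mean(axis=(2, 3), keepdims=True)
    sigma = x.std(axis=(2, 3), keepdims=True) + 1e-6
    # Gaussian perturbation of the style statistics.
    eps_mu = rng.normal(0.0, alpha, size=mu.shape)
    eps_sigma = rng.normal(0.0, alpha, size=sigma.shape)
    # Normalize out the original style, re-inject the perturbed style.
    normalized = (x - mu) / sigma
    return normalized * (sigma * (1.0 + eps_sigma)) + mu * (1.0 + eps_mu)
```

With `alpha = 0` the transform is the identity; increasing `alpha` widens the distribution of simulated styles without any extra generation network, which is the diversity argument the abstract makes against pixel-level augmentation.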


Similar Articles

1. Style Uncertainty Based Self-Paced Meta Learning for Generalizable Person Re-Identification.
   IEEE Trans Image Process. 2023;32:2107-2119. doi: 10.1109/TIP.2023.3263112.
2. GCReID: Generalized continual person re-identification via meta learning and knowledge accumulation.
   Neural Netw. 2024 Nov;179:106561. doi: 10.1016/j.neunet.2024.106561. Epub 2024 Jul 22.
3. Multi-Domain Adversarial Feature Generalization for Person Re-Identification.
   IEEE Trans Image Process. 2021;30:1596-1607. doi: 10.1109/TIP.2020.3046864. Epub 2021 Jan 11.
4. Learning Domain Invariant Representations for Generalizable Person Re-Identification.
   IEEE Trans Image Process. 2023;32:509-523. doi: 10.1109/TIP.2022.3229621. Epub 2022 Dec 30.
5. Invariant Content Representation for Generalizable Medical Image Segmentation.
   J Imaging Inform Med. 2024 Dec;37(6):3193-3207. doi: 10.1007/s10278-024-01088-9. Epub 2024 May 17.
6. A Memorizing and Generalizing Framework for Lifelong Person Re-Identification.
   IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13567-13585. doi: 10.1109/TPAMI.2023.3297058. Epub 2023 Oct 3.
7. CDDSA: Contrastive domain disentanglement and style augmentation for generalizable medical image segmentation.
   Med Image Anal. 2023 Oct;89:102904. doi: 10.1016/j.media.2023.102904. Epub 2023 Jul 18.
8. Out-of-Domain Generalization From a Single Source: An Uncertainty Quantification Approach.
   IEEE Trans Pattern Anal Mach Intell. 2024 Mar;46(3):1775-1787. doi: 10.1109/TPAMI.2022.3184598. Epub 2024 Feb 6.
9. Towards Robust Person Re-Identification by Defending Against Universal Attackers.
   IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):5218-5235. doi: 10.1109/TPAMI.2022.3199013. Epub 2023 Mar 7.
10. NormAUG: Normalization-Guided Augmentation for Domain Generalization.
   IEEE Trans Image Process. 2024;33:1419-1431. doi: 10.1109/TIP.2024.3364516. Epub 2024 Feb 21.

Cited By

1. Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification.
   Sensors (Basel). 2025 Jan 18;25(2):552. doi: 10.3390/s25020552.