


Feature-Tuning Hierarchical Transformer via token communication and sample aggregation constraint for object re-identification.

Authors

Yu Zhi, Huang Zhiyong, Hou Mingyang, Pei Jiaming, Yan Yan, Liu Yushi, Sun Daming

Affiliations

School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China; Key Laboratory of Dependable Service Computing in Cyber Physical Society (Chongqing University), Ministry of Education of China, Chongqing University, Chongqing, 400044, China.

School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China.

Publication

Neural Netw. 2025 Jul;187:107394. doi: 10.1016/j.neunet.2025.107394. Epub 2025 Mar 20.

DOI: 10.1016/j.neunet.2025.107394
PMID: 40120549
Abstract

Recently, transformer-based methods have shown remarkable success in object re-identification. However, most works directly embed off-the-shelf transformer backbones for feature extraction. These methods treat all patch tokens equally, ignoring the difference of distinct patch tokens for feature representation. To solve this issue, this paper designs a feature-tuning mechanism for transformer backbones to emphasize important patches and attenuate unimportant patches. Specifically, a Feature-tuning Hierarchical Transformer (FHTrans) for object re-identification is proposed. First, we propose a plug-and-play Feature-tuning module via Token Communication (TCF) deployed within transformer encoder blocks. This module regards the class token as a pivot to achieve communication between patch tokens. Important patch tokens are emphasized, while unimportant patch tokens are attenuated, focusing more precisely on the discriminative features related to object distinction. Then, we construct a FHTrans based on the designed feature-tuning module. The encoder blocks are divided into three hierarchies considering the correlation between feature representativeness and transformer depth. As the hierarchy deepens, the communication between tokens becomes tighter. This enables the model to capture more crucial feature information. Finally, we propose a Sample Aggregation (SA) loss to impose more effective constraints on statistical characteristics among samples, thereby enhancing intra-class aggregation and guiding FHTrans to learn more discriminative features. Experiments on object re-identification benchmarks demonstrate that our method can achieve state-of-the-art performance.
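The abstract describes the TCF module as using the class token as a pivot to emphasize important patch tokens and attenuate unimportant ones. The paper's exact formulation is not given here; the following is a minimal NumPy sketch of that general idea, where the gating function (a scaled sigmoid over class-token similarity) and all names are assumptions for illustration:

```python
import numpy as np

def token_communication_tuning(tokens: np.ndarray) -> np.ndarray:
    """Reweight patch tokens by their similarity to the class token.

    tokens: (N+1, D) array; row 0 is the class token, rows 1..N are patch tokens.
    Patches similar to the class token are emphasized (gate > 1),
    dissimilar ones attenuated (gate < 1). The class token is left unchanged.
    """
    cls, patches = tokens[0], tokens[1:]
    # cosine similarity of each patch token to the class-token pivot
    sim = patches @ cls / (np.linalg.norm(patches, axis=1) * np.linalg.norm(cls) + 1e-8)
    # map similarity to a multiplicative gate around 1 (sigmoid form is an assumption)
    gate = 2.0 / (1.0 + np.exp(-sim))  # in (0, 2): >1 emphasizes, <1 attenuates
    out = tokens.copy()
    out[1:] = patches * gate[:, None]
    return out
```

Being plug-and-play, a module like this could in principle be applied after any encoder block, with the hierarchy depth controlling how aggressively the gate is applied.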

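The Sample Aggregation (SA) loss is described only as constraining statistical characteristics among samples to enhance intra-class aggregation; its formula is not reproduced here. A generic stand-in for that goal — penalizing each feature's squared distance to its class centroid — might look like the following sketch (not the paper's actual loss):

```python
import numpy as np

def sample_aggregation_loss(features: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared distance of each feature vector to its class centroid.

    features: (N, D) array of embeddings; labels: (N,) integer class IDs.
    Lower values mean tighter intra-class aggregation.
    """
    total = 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        center = class_feats.mean(axis=0)  # per-class statistical center
        total += ((class_feats - center) ** 2).sum()
    return total / len(labels)
```

Minimizing such a term alongside the usual identity loss pulls same-identity embeddings together, which matches the stated aim of guiding the backbone toward more discriminative features.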

Similar Articles

1. Feature-Tuning Hierarchical Transformer via token communication and sample aggregation constraint for object re-identification.
Neural Netw. 2025 Jul;187:107394. doi: 10.1016/j.neunet.2025.107394. Epub 2025 Mar 20.
2. MCTformer+: Multi-Class Token Transformer for Weakly Supervised Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):8380-8395. doi: 10.1109/TPAMI.2024.3404422. Epub 2024 Nov 6.
3. DRTN: Dual Relation Transformer Network with feature erasure and contrastive learning for multi-label image classification.
Neural Netw. 2025 Jul;187:107309. doi: 10.1016/j.neunet.2025.107309. Epub 2025 Mar 3.
4. TLTNet: A novel transscale cascade layered transformer network for enhanced retinal blood vessel segmentation.
Comput Biol Med. 2024 Aug;178:108773. doi: 10.1016/j.compbiomed.2024.108773. Epub 2024 Jun 25.
5. Hierarchical Graph Interaction Transformer With Dynamic Token Clustering for Camouflaged Object Detection.
IEEE Trans Image Process. 2024;33:5936-5948. doi: 10.1109/TIP.2024.3475219. Epub 2024 Oct 18.
6. Hierarchical agent transformer network for COVID-19 infection segmentation.
Biomed Phys Eng Express. 2025 Mar 12;11(2). doi: 10.1088/2057-1976/adbafa.
7. Occlusion-Aware Transformer With Second-Order Attention for Person Re-Identification.
IEEE Trans Image Process. 2024;33:3200-3211. doi: 10.1109/TIP.2024.3393360. Epub 2024 May 6.
8. Multi-Scale Efficient Graph-Transformer for Whole Slide Image Classification.
IEEE J Biomed Health Inform. 2023 Dec;27(12):5926-5936. doi: 10.1109/JBHI.2023.3317067. Epub 2023 Dec 5.
9. HIPA: Hierarchical Patch Transformer for Single Image Super Resolution.
IEEE Trans Image Process. 2023;32:3226-3237. doi: 10.1109/TIP.2023.3279977. Epub 2023 Jun 6.
10. Pedestrian Re-Identification Based on Fine-Grained Feature Learning and Fusion.
Sensors (Basel). 2024 Nov 26;24(23):7536. doi: 10.3390/s24237536.