

Dynamic Weighting Network for Person Re-Identification

Authors

Li Guang, Liu Peng, Cao Xiaofan, Liu Chunguang

Affiliations

School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China.

Yangzhong Intelligent Electric Research Center, North China Electric Power University, Yangzhong 212211, China.

Publication

Sensors (Basel). 2023 Jun 14;23(12):5579. doi: 10.3390/s23125579.

DOI: 10.3390/s23125579
PMID: 37420745
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10304122/
Abstract

Recently, hybrid Convolution-Transformer architectures have become popular due to their ability to capture both local and global image features and their lower computational cost compared with pure Transformer models. However, directly embedding a Transformer can result in the loss of convolution-based features, particularly fine-grained features, so using these architectures as the backbone of a re-identification task is not effective. To address this challenge, we propose a feature fusion gate unit that dynamically adjusts the ratio of local to global features. The feature fusion gate unit fuses the convolution and self-attention branches of the network with dynamic parameters conditioned on the input. This unit can be integrated into different layers or multiple residual blocks, with varying effects on model accuracy. Using feature fusion gate units, we propose a simple and portable model called the dynamic weighting network (DWNet), which supports two backbones, ResNet and OSNet, yielding DWNet-R and DWNet-O, respectively. DWNet significantly improves re-identification performance over the original baselines while maintaining reasonable computational cost and parameter count. Our DWNet-R achieves mAPs of 87.53%, 79.18%, and 50.03% on the Market1501, DukeMTMC-reID, and MSMT17 datasets, respectively; our DWNet-O achieves mAPs of 86.83%, 78.68%, and 55.66% on the same datasets.
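The dynamic fusion described in the abstract can be sketched as an input-conditioned gate that blends the two branches. The following is a minimal NumPy illustration, not the authors' implementation: the pool-then-sigmoid design and all function and parameter names here are assumptions made for illustration only.

```python
import numpy as np

def feature_fusion_gate(local_feat, global_feat, w, b):
    """Blend a convolutional (local) and a self-attention (global) feature map
    with a per-channel weight computed from the input itself.

    Shapes (hypothetical): feature maps (C, H, W); w (C, 2C); b (C,).
    """
    # Summarize both branches via global average pooling -> (2C,) descriptor
    desc = np.concatenate([local_feat.mean(axis=(1, 2)),
                           global_feat.mean(axis=(1, 2))])
    # Input-dependent sigmoid gate, one weight per channel in (0, 1)
    alpha = 1.0 / (1.0 + np.exp(-(w @ desc + b)))
    # Convex, channel-wise combination of the two branches
    return (alpha[:, None, None] * local_feat
            + (1.0 - alpha)[:, None, None] * global_feat)

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
local_feat = rng.standard_normal((C, H, W))
global_feat = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C)) * 0.1
b = np.zeros(C)
fused = feature_fusion_gate(local_feat, global_feat, w, b)
print(fused.shape)  # (4, 8, 8)
```

Because the gate output lies in (0, 1), each fused value stays between the corresponding local and global activations, so the unit can interpolate smoothly between the two branches depending on the input.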


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/b03cab7fb3d1/sensors-23-05579-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/3242ee36206d/sensors-23-05579-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/323bccecc1e1/sensors-23-05579-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/6f4356d210ae/sensors-23-05579-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/da28ab434bbc/sensors-23-05579-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/93eeaf04781a/sensors-23-05579-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81ca/10304122/5b0613460980/sensors-23-05579-g007.jpg

Similar Articles

1. Dynamic Weighting Network for Person Re-Identification.
Sensors (Basel). 2023 Jun 14;23(12):5579. doi: 10.3390/s23125579.
2. Integration of Multi-Head Self-Attention and Convolution for Person Re-Identification.
Sensors (Basel). 2022 Aug 21;22(16):6293. doi: 10.3390/s22166293.
3. Seeing Like a Human: Asynchronous Learning With Dynamic Progressive Refinement for Person Re-Identification.
IEEE Trans Image Process. 2022;31:352-365. doi: 10.1109/TIP.2021.3128330. Epub 2021 Dec 13.
4. Enhancing Person Re-Identification through Attention-Driven Global Features and Angular Loss Optimization.
Entropy (Basel). 2024 May 21;26(6):436. doi: 10.3390/e26060436.
5. TwinsReID: Person re-identification based on twins transformer's multi-level features.
Math Biosci Eng. 2023 Jan;20(2):2110-2130. doi: 10.3934/mbe.2023098. Epub 2022 Nov 14.
6. Bidirectional Interaction Network for Person Re-Identification.
IEEE Trans Image Process. 2021;30:1935-1948. doi: 10.1109/TIP.2021.3049943. Epub 2021 Jan 20.
7. TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.
Quant Imaging Med Surg. 2022 Apr;12(4):2397-2415. doi: 10.21037/qims-21-919.
8. Unsupervised Person Re-Identification with Attention-Guided Fine-Grained Features and Symmetric Contrast Learning.
Sensors (Basel). 2022 Sep 15;22(18):6978. doi: 10.3390/s22186978.
9. Stochastic attentions and context learning for person re-identification.
PeerJ Comput Sci. 2021 May 5;7:e447. doi: 10.7717/peerj-cs.447. eCollection 2021.
10. Euclidean-Distance-Preserved Feature Reduction for efficient person re-identification.
Neural Netw. 2024 Dec;180:106572. doi: 10.1016/j.neunet.2024.106572. Epub 2024 Aug 8.

Cited By

1. Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification.
Sensors (Basel). 2025 Jan 18;25(2):552. doi: 10.3390/s25020552.

References

1. AAformer: Auto-Aligned Transformer for Person Re-Identification.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17307-17317. doi: 10.1109/TNNLS.2023.3301856. Epub 2024 Dec 2.
2. End-to-End Comparative Attention Networks for Person Re-Identification.
IEEE Trans Image Process. 2017 Jul;26(7):3492-3506. doi: 10.1109/TIP.2017.2700762. Epub 2017 May 3.