Research on Person Re-Identification through Local and Global Attention Mechanisms and Combination Poolings.

Authors

Zhou Jieqian, Zhao Shuai, Li Shengjie, Cheng Bo, Chen Junliang

Affiliation

State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China.

Publication

Sensors (Basel). 2024 Aug 30;24(17):5638. doi: 10.3390/s24175638.

DOI: 10.3390/s24175638
PMID: 39275548
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11398040/
Abstract

This research proposes constructing a network used for person re-identification called MGNACP (Multiple Granularity Network with Attention Mechanisms and Combination Poolings). Based on the MGN (Multiple Granularity Network) that combines global and local features and the characteristics of the MGN branch, the MGNA (Multiple Granularity Network with Attentions) is designed by adding a channel attention mechanism to each global and local branch of the MGN. The MGNA, with attention mechanisms, learns the most identifiable information about global and local features to improve the person re-identification accuracy. Based on the constructed MGNA, a single pooling used in each branch is replaced by combination pooling to form MGNACP. The combination pooling parameters are the proportions of max pooling and average pooling in combination pooling. Through experiments, suitable combination pooling parameters are found, the advantages of max pooling and average pooling are preserved and enhanced, and the disadvantages of both types of pooling are overcome, so that poolings can achieve optimal results in MGNACP and improve the person re-identification accuracy. In experiments on the Market-1501 dataset, MGNACP achieved competitive experimental results; the values of mAP and top-1 are 88.82% and 95.46%. The experimental results demonstrate that MGNACP is a competitive person re-identification network, and that the attention mechanisms and combination poolings can significantly improve the person re-identification accuracy.
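The abstract describes combination pooling as a mix of max pooling and average pooling controlled by proportion parameters that the authors tune experimentally. A minimal sketch of that idea in plain Python (the function name and the default weight `p_max` are illustrative assumptions, not taken from the paper):

```python
def combination_pool(feature_map, p_max=0.5):
    """Reduce a 2-D feature map (a list of rows) to one value by mixing
    global max pooling and global average pooling.

    p_max is the proportion of max pooling; (1 - p_max) is the
    proportion of average pooling. The paper searches for suitable
    proportions experimentally; 0.5 here is just a placeholder.
    """
    flat = [v for row in feature_map for v in row]
    max_pooled = max(flat)                  # global max pooling
    avg_pooled = sum(flat) / len(flat)      # global average pooling
    return p_max * max_pooled + (1.0 - p_max) * avg_pooled

# Example: on a 2x2 feature map, max pooling alone gives 3.0,
# average pooling alone gives 2.0, and an equal mix gives 2.5.
fm = [[1.0, 3.0], [2.0, 2.0]]
print(combination_pool(fm, p_max=0.5))  # 2.5
```

With `p_max = 1.0` this reduces to pure max pooling and with `p_max = 0.0` to pure average pooling, which is why a tuned intermediate value can retain the advantages of both.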


[Figures 1-12 (sensors-24-05638-g001 through g012) are available with the PMC full text.]

Similar Articles

1. Research on Person Re-Identification through Local and Global Attention Mechanisms and Combination Poolings.
   Sensors (Basel). 2024 Aug 30;24(17):5638. doi: 10.3390/s24175638.
2. Multi-granularity graph pooling for video-based person re-identification.
   Neural Netw. 2023 Mar;160:22-33. doi: 10.1016/j.neunet.2022.12.015. Epub 2022 Dec 28.
3. Holstein Cattle Face Re-Identification Unifying Global and Part Feature Deep Network with Attention Mechanism.
   Animals (Basel). 2022 Apr 18;12(8):1047. doi: 10.3390/ani12081047.
4. TwinsReID: Person re-identification based on twins transformer's multi-level features.
   Math Biosci Eng. 2023 Jan;20(2):2110-2130. doi: 10.3934/mbe.2023098. Epub 2022 Nov 14.
5. Dual Branch Attention Network for Person Re-Identification.
   Sensors (Basel). 2021 Aug 30;21(17):5839. doi: 10.3390/s21175839.
6. A Multi-Attention Approach for Person Re-Identification Using Deep Learning.
   Sensors (Basel). 2023 Apr 2;23(7):3678. doi: 10.3390/s23073678.
7. Multi-Level Fusion Temporal-Spatial Co-Attention for Video-Based Person Re-Identification.
   Entropy (Basel). 2021 Dec 15;23(12):1686. doi: 10.3390/e23121686.
8. Video-based person re-identification with complementary local and global features using a graph transformer.
   Math Biosci Eng. 2024 Jul 23;21(7):6694-6709. doi: 10.3934/mbe.2024293.
9. Multi-Biometric Unified Network for Cloth-Changing Person Re-Identification.
   IEEE Trans Image Process. 2023;32:4555-4566. doi: 10.1109/TIP.2023.3279673.
10. Heterogeneous feature-aware Transformer-CNN coupling network for person re-identification.
    PeerJ Comput Sci. 2022 Sep 27;8:e1098. doi: 10.7717/peerj-cs.1098. eCollection 2022.

Cited By

1. Identity Hides in Darkness: Learning Feature Discovery Transformer for Nighttime Person Re-Identification.
   Sensors (Basel). 2025 Jan 31;25(3):862. doi: 10.3390/s25030862.
2. Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification.
   Sensors (Basel). 2025 Jan 18;25(2):552. doi: 10.3390/s25020552.

References

1. SR-DSFF and FENet-ReID: A Two-Stage Approach for Cross Resolution Person Re-Identification.
   Comput Intell Neurosci. 2022 Jul 5;2022:4398727. doi: 10.1155/2022/4398727. eCollection 2022.
2. MHSA-Net: Multihead Self-Attention Network for Occluded Person Re-Identification.
   IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):8210-8224. doi: 10.1109/TNNLS.2022.3144163. Epub 2023 Oct 27.
3. Squeeze-and-Excitation Networks.
   IEEE Trans Pattern Anal Mach Intell. 2020 Aug;42(8):2011-2023. doi: 10.1109/TPAMI.2019.2913372. Epub 2019 Apr 29.