

Occluded Pedestrian-Attribute Recognition for Video Sensors Using Group Sparsity.

Affiliations

College of Information Technology, Gachon University, Seongnam 13120, Korea.

Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea.

Publication

Sensors (Basel). 2022 Sep 1;22(17):6626. doi: 10.3390/s22176626.

DOI: 10.3390/s22176626
PMID: 36081084
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460213/
Abstract

Pedestrians are often obstructed by other objects or people in real-world vision sensors. These obstacles make pedestrian-attribute recognition (PAR) difficult; hence, occlusion processing for visual sensing is a key issue in PAR. To address this problem, we first formulate the identification of non-occluded frames as temporal attention based on the sparsity of a crowded video. In other words, a model for PAR is guided to prevent paying attention to the occluded frame. However, we deduced that this approach cannot include a correlation between attributes when occlusion occurs. For example, "boots" and "shoe color" cannot be recognized simultaneously when the foot is invisible. To address the uncorrelated attention issue, we propose a novel temporal-attention module based on group sparsity. Group sparsity is applied across attention weights in correlated attributes. Accordingly, physically-adjacent pedestrian attributes are grouped, and the attention weights of a group are forced to focus on the same frames. Experimental results indicate that the proposed method achieved 1.18% and 6.21% higher F1-scores than the advanced baseline method on the occlusion samples in DukeMTMC-VideoReID and MARS video-based PAR datasets, respectively.
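The group-sparsity idea in the abstract can be illustrated with a small sketch: attention weights for physically adjacent attributes (e.g., "boots" and "shoe color") are grouped, and an L2,1-style penalty makes dropping a frame "cheap" only if the whole group drops it, so grouped attributes are pushed to attend to the same non-occluded frames. The shapes, group definitions, and normalization below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def group_sparsity_penalty(attn, groups):
    """Group-sparse (L2,1) penalty on temporal attention weights.

    attn:   (num_attributes, num_frames) attention weights.
    groups: list of attribute-index lists (hypothetical groups of
            physically adjacent attributes).

    For each group and each frame, take the L2 norm over the group's
    attributes, then sum (L1) over frames. Zeroing a frame for one
    attribute in a group barely reduces the penalty unless every
    attribute in the group also ignores that frame, which encourages
    a shared set of attended frames per group.
    """
    penalty = 0.0
    for g in groups:
        # attn[g, :] has shape (len(g), num_frames):
        # L2 over the group's attributes, then L1 over frames.
        penalty += np.linalg.norm(attn[g, :], axis=0).sum()
    return penalty

# Toy example: 4 attributes, 5 frames; attributes 2 and 3 grouped.
rng = np.random.default_rng(0)
attn = rng.random((4, 5))
attn /= attn.sum(axis=1, keepdims=True)   # softmax-like row normalization
loss = group_sparsity_penalty(attn, groups=[[0, 1], [2, 3]])
```

Note the effect: two grouped attributes that both attend fully to frame 0 incur a penalty of sqrt(2), while the same attention mass split across different frames incurs 2, so concentrating a group on the same frames is strictly cheaper.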


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/57df4701e2ed/sensors-22-06626-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/8d025a339c4d/sensors-22-06626-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/8e1d8883bea0/sensors-22-06626-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/1c7ca156dd6c/sensors-22-06626-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/170651c0d9b0/sensors-22-06626-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/8833d1f0f42a/sensors-22-06626-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3185/9460213/04d017291a07/sensors-22-06626-g007.jpg

Similar articles

1. Occluded Pedestrian-Attribute Recognition for Video Sensors Using Group Sparsity.
Sensors (Basel). 2022 Sep 1;22(17):6626. doi: 10.3390/s22176626.
2. Attention Based CNN-ConvLSTM for Pedestrian Attribute Recognition.
Sensors (Basel). 2020 Feb 3;20(3):811. doi: 10.3390/s20030811.
3. Mask-Guided Attention Network and Occlusion-Sensitive Hard Example Mining for Occluded Pedestrian Detection.
IEEE Trans Image Process. 2021;30:3872-3884. doi: 10.1109/TIP.2020.3040854. Epub 2021 Mar 26.
4. A Richly Annotated Pedestrian Dataset for Person Retrieval in Real Surveillance Scenarios.
IEEE Trans Image Process. 2019 Apr;28(4):1575-1590. doi: 10.1109/TIP.2018.2878349. Epub 2018 Oct 26.
5. Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences.
Comput Intell Neurosci. 2016;2016:8163878. doi: 10.1155/2016/8163878. Epub 2016 Oct 25.
6. Pedestrian re-identification based on attention mechanism and Multi-scale feature fusion.
Math Biosci Eng. 2023 Aug 25;20(9):16913-16938. doi: 10.3934/mbe.2023754.
7. Pedestrian attribute recognition using two-branch trainable Gabor wavelets network.
PLoS One. 2021 Jun 1;16(6):e0251667. doi: 10.1371/journal.pone.0251667. eCollection 2021.
8. A Boosted Multi-Task Model for Pedestrian Detection With Occlusion Handling.
IEEE Trans Image Process. 2015 Dec;24(12):5619-29. doi: 10.1109/TIP.2015.2483376. Epub 2015 Sep 28.
9. An End-to-End Foreground-Aware Network for Person Re-Identification.
IEEE Trans Image Process. 2021;30:2060-2071. doi: 10.1109/TIP.2021.3050839. Epub 2021 Jan 21.
10. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.
Sensors (Basel). 2016 Aug 16;16(8):1296. doi: 10.3390/s16081296.

References cited in this article

1. A Binarized Segmented ResNet Based on Edge Computing for Re-Identification.
Sensors (Basel). 2020 Dec 3;20(23):6902. doi: 10.3390/s20236902.
2. Focus on the Visible Regions: Semantic-Guided Alignment Model for Occluded Person Re-Identification.
Sensors (Basel). 2020 Aug 8;20(16):4431. doi: 10.3390/s20164431.
3. Attention Based CNN-ConvLSTM for Pedestrian Attribute Recognition.
Sensors (Basel). 2020 Feb 3;20(3):811. doi: 10.3390/s20030811.
4. 3D convolutional neural networks for human action recognition.
IEEE Trans Pattern Anal Mach Intell. 2013 Jan;35(1):221-31. doi: 10.1109/TPAMI.2012.59.