
Crowd Scene Analysis Encounters High Density and Scale Variation.

Authors

Xue Yao, Li Yonghui, Liu Siming, Zhang Xingjun, Qian Xueming

Publication

IEEE Trans Image Process. 2021;30:2745-2757. doi: 10.1109/TIP.2021.3049963. Epub 2021 Feb 12.

DOI: 10.1109/TIP.2021.3049963
PMID: 33502976
Abstract

Crowd scene analysis receives growing attention due to its wide applications. Grasping the accurate crowd location is important for identifying high-risk regions. In this article, we propose a Compressed Sensing based Output Encoding (CSOE) scheme, which casts detecting pixel coordinates of small objects into a task of signal regression in encoding signal space. To prevent gradient vanishing, we derive our own sparse reconstruction backpropagation rule that is adaptive to distinct implementations of sparse reconstruction and makes the whole model end-to-end trainable. With the support of CSOE and the backpropagation rule, the proposed method shows more robustness to deep model training error, which is especially harmful to crowd counting and localization. The proposed method achieves state-of-the-art performance across four mainstream datasets and achieves especially strong results in highly crowded scenes. A series of analyses and experiments support our claim that, for highly crowded scenes, regression in CSOE space is better than traditionally detecting coordinates of small objects in pixel space.
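The core idea in the abstract can be illustrated with a minimal sketch: head locations form a sparse binary map x, the network regresses its compressed measurement y = Ax rather than x itself, and locations are recovered by sparse reconstruction. The dimensions, the random Gaussian measurement matrix, and the ISTA decoder below are assumptions for illustration, not the paper's actual network or reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256  # flattened pixel grid of a patch (e.g., 16x16) -- assumed size
m = 64   # compressed measurement length, m << n -- assumed size
k = 3    # number of head locations in the patch (sparsity)

# Random Gaussian measurement matrix (a common compressed-sensing choice).
A = rng.normal(size=(m, n)) / np.sqrt(m)

# Ground-truth sparse location map: 1 at each head pixel, 0 elsewhere.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0

# Encoding: y is the regression target a network would be trained to predict.
y = A @ x_true

def ista(A, y, lam=0.05, iters=500):
    """Decode by iterative soft-thresholding (ISTA), one standard
    sparse-reconstruction algorithm (stand-in for the paper's decoder)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)                    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

x_hat = ista(A, y)
recovered = np.argsort(x_hat)[-k:]  # top-k entries as predicted locations
print("recovered head indices:", sorted(int(i) for i in recovered))
```

With far fewer measurements than pixels (64 vs. 256 here), the sparse location map is still recovered exactly, which is why regressing the short code y can be easier and more robust than predicting every pixel coordinate directly.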


Similar Articles

1. Oriented Localization of Surgical Tools by Location Encoding.
   IEEE Trans Biomed Eng. 2022 Apr;69(4):1469-1480. doi: 10.1109/TBME.2021.3120430. Epub 2022 Mar 18.
2. Crowd Counting Based on Multiscale Spatial Guided Perception Aggregation Network.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17465-17478. doi: 10.1109/TNNLS.2023.3304348. Epub 2024 Dec 2.
3. HADF-Crowd: A Hierarchical Attention-Based Dense Feature Extraction Network for Single-Image Crowd Counting.
   Sensors (Basel). 2021 May 17;21(10):3483. doi: 10.3390/s21103483.
4. Meta-Knowledge and Multi-Task Learning-Based Multi-Scene Adaptive Crowd Counting.
   Sensors (Basel). 2022 Apr 26;22(9):3320. doi: 10.3390/s22093320.
5. NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):2141-2149. doi: 10.1109/TPAMI.2020.3013269. Epub 2021 May 11.
6. One-Shot Any-Scene Crowd Counting With Local-to-Global Guidance.
   IEEE Trans Image Process. 2024;33:6622-6632. doi: 10.1109/TIP.2024.3420713. Epub 2024 Dec 3.
7. Congested Crowd Counting via Adaptive Multi-Scale Context Learning.
   Sensors (Basel). 2021 May 29;21(11):3777. doi: 10.3390/s21113777.
8. Redesigning Multi-Scale Neural Network for Crowd Counting.
   IEEE Trans Image Process. 2023;32:3664-3678. doi: 10.1109/TIP.2023.3289290. Epub 2023 Jul 4.
9. Adversarial Learning for Multiscale Crowd Counting Under Complex Scenes.
   IEEE Trans Cybern. 2021 Nov;51(11):5423-5432. doi: 10.1109/TCYB.2019.2956091. Epub 2021 Nov 9.