One-Shot Any-Scene Crowd Counting With Local-to-Global Guidance.

Authors

Chen Jiwei, Wang Zengfu

Publication

IEEE Trans Image Process. 2024;33:6622-6632. doi: 10.1109/TIP.2024.3420713. Epub 2024 Dec 3.

Abstract

Due to the different installation angles, heights, and positions of cameras in real-world scenes, it is difficult for crowd counting models to work in unseen surveillance scenes. In this paper, we are interested in accurate crowd counting based on the data collected by any surveillance camera, that is, counting the crowd in any scene given only one annotated image from that scene. To this end, we first pose crowd counting as a one-shot learning task. Through metric learning, we propose a simple yet effective method that first estimates crowd characteristics and then transfers them to guide the model in counting the crowd. Specifically, to fully capture the crowd characteristics of the target scene, we devise a Multi-Prototype Learner that learns prototypes of foreground and density from the limited support image using the Expectation-Maximization algorithm. To learn the adaptation capability for any unseen scene, the estimated multi-prototypes are used to guide the crowd counting of query images in a local-to-global way: a CNN is used to activate local features, and a Transformer is introduced to correlate global features. Extensive experiments on three surveillance datasets show that our method outperforms state-of-the-art methods on few-shot crowd counting.
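A minimal sketch of the two ideas the abstract names: EM-style estimation of feature prototypes from a single annotated support image, and using those prototypes to guide a query image. Everything here is an assumption for illustration, not the authors' implementation: the function names (em_prototypes, guide_query), the use of cosine similarity with a fixed temperature, the prototype count, and the random NumPy arrays standing in for CNN features. The paper's local CNN activation and global Transformer correlation steps are omitted.

```python
import numpy as np

def em_prototypes(features, num_prototypes=4, num_iters=10):
    """EM-style (soft k-means) estimation of prototypes from support features.

    features: (N, C) array of support-image feature vectors, e.g. foreground
              pixels of a CNN feature map flattened. Hypothetical stand-in.
    Returns a (num_prototypes, C) prototype matrix on the unit sphere.
    """
    n, _ = features.shape
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    rng = np.random.default_rng(0)
    # initialise prototypes from randomly chosen feature vectors
    protos = feats[rng.choice(n, num_prototypes, replace=False)]
    for _ in range(num_iters):
        # E-step: soft-assign each feature to each prototype (cosine similarity)
        sim = feats @ protos.T                    # (N, K)
        assign = np.exp(sim * 10.0)               # temperature is an assumption
        assign /= assign.sum(axis=1, keepdims=True)
        # M-step: prototypes become assignment-weighted means, re-normalised
        protos = assign.T @ feats                 # (K, C)
        protos /= (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    return protos

def guide_query(query_feat_map, protos):
    """Produce a per-pixel guidance map for a query image: the maximum cosine
    similarity between each query feature and any prototype."""
    h, w, c = query_feat_map.shape
    q = query_feat_map.reshape(-1, c)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    sim = q @ protos.T                            # (H*W, K)
    return sim.max(axis=1).reshape(h, w)

# Toy usage with random features standing in for CNN outputs.
support_feats = np.random.randn(500, 64).astype(np.float32)
protos = em_prototypes(support_feats, num_prototypes=4)
query_map = np.random.randn(32, 32, 64).astype(np.float32)
guidance = guide_query(query_map, protos)
print(protos.shape, guidance.shape)               # (4, 64) (32, 32)
```

In a full model the guidance map would modulate the query's local CNN features and then be correlated globally (the paper uses a Transformer for this); here it simply illustrates how prototypes learned from one support image can be transferred to an unseen query scene.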

