
A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics.

Affiliations

Institute for Advanced Simulation, Forschungszentrum Jülich, 52425 Jülich, Germany.

Computer Simulation for Fire Protection and Pedestrian Traffic, Faculty of Architecture and Civil Engineering, University of Wuppertal, 42285 Wuppertal, Germany.

Publication Information

Sensors (Basel). 2022 May 26;22(11):4040. doi: 10.3390/s22114040.

Abstract

Crowded event entrances can threaten the comfort and safety of pedestrians, especially when some pedestrians push others or exploit gaps in the crowd to gain faster access to an event. Studying and understanding pushing dynamics helps in designing and building more comfortable and safer entrances. To understand pushing dynamics, researchers observe and analyze recorded videos to manually identify when and where pushing behavior occurs. Although the manual method is accurate, it is time-consuming and tedious, and in some scenarios pushing behavior is hard to identify. In this article, we propose a hybrid deep learning and visualization framework that aims to assist researchers in automatically identifying pushing behavior in videos. The proposed framework comprises two main components: (i) deep optical flow and wheel visualization to generate motion information maps, and (ii) a combination of an EfficientNet-B0-based classifier and a false reduction algorithm for detecting pushing behavior at the video patch level. In addition to the framework, we present a new patch-based approach to enlarge the data and alleviate the class imbalance problem in small-scale pushing behavior datasets. Experimental results (using real-world ground truth of pushing behavior videos) demonstrate that the proposed framework achieves an 86% accuracy rate. Moreover, the EfficientNet-B0-based classifier outperforms baseline CNN-based classifiers in terms of accuracy.
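The "wheel visualization" in component (i) refers to the common practice of rendering a dense optical-flow field as a color image, with hue encoding motion direction and brightness encoding speed, so that a standard image classifier can consume the motion information. The following is a minimal, self-contained NumPy sketch of that rendering step only; the flow field itself would come from a deep optical-flow estimator (not shown here), and the function name `flow_to_wheel` is illustrative, not from the paper:

```python
import numpy as np

def flow_to_wheel(flow):
    """Map a dense optical-flow field of shape (H, W, 2) to an RGB image
    where hue encodes motion direction and brightness encodes speed
    (the standard color-wheel visualization of optical flow)."""
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.hypot(dx, dy)                    # speed per pixel
    ang = np.arctan2(dy, dx)                  # direction in [-pi, pi]
    hue = (ang + np.pi) / (2 * np.pi)         # normalize direction to [0, 1]
    val = mag / max(mag.max(), 1e-8)          # brightness proportional to speed
    # Convert HSV (full saturation) to RGB without external dependencies.
    h6 = hue * 6.0
    i = np.floor(h6).astype(int) % 6
    f = h6 - np.floor(h6)
    p = np.zeros_like(val)
    q = val * (1.0 - f)
    t = val * f
    rgb = np.zeros(flow.shape[:2] + (3,))
    sectors = [(val, t, p), (q, val, p), (p, val, t),
               (p, q, val), (t, p, val), (val, p, q)]
    for k, (r, g, b) in enumerate(sectors):
        m = i == k
        rgb[m] = np.stack([r[m], g[m], b[m]], axis=-1)
    return (rgb * 255).astype(np.uint8)

# Toy flow field: every pixel moving right at unit speed.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
img = flow_to_wheel(flow)
```

In the framework described by the abstract, such motion maps would then be split into patches and fed to the EfficientNet-B0-based classifier; uniform rightward motion, as in the toy example, maps to a single hue across the whole image.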


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63f/9185482/efccfe5755fa/sensors-22-04040-g001.jpg
