


A Background-Agnostic Framework With Adversarial Training for Abnormal Event Detection in Video.

Authors

Georgescu Mariana Iuliana, Ionescu Radu Tudor, Khan Fahad Shahbaz, Popescu Marius, Shah Mubarak

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4505-4523. doi: 10.1109/TPAMI.2021.3074805. Epub 2022 Aug 4.

DOI: 10.1109/TPAMI.2021.3074805
PMID: 33881990
Abstract

Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years. The complexity of the task arises from the commonly adopted definition of an abnormal event, that is, a rarely occurring event that typically depends on the surrounding context. Following the standard formulation of abnormal event detection as outlier detection, we propose a background-agnostic framework that learns from training videos containing only normal events. Our framework is composed of an object detector, a set of appearance and motion auto-encoders, and a set of classifiers. Since our framework only looks at object detections, it can be applied to different scenes, provided that normal events are defined identically across scenes and that the single main factor of variation is the background. This makes our method background agnostic, as we rely strictly on objects that can cause anomalies, and not on the background. To overcome the lack of abnormal data during training, we propose an adversarial learning strategy for the auto-encoders. We create a scene-agnostic set of out-of-domain pseudo-abnormal examples, which are correctly reconstructed by the auto-encoders before applying gradient ascent on the pseudo-abnormal examples. We further utilize the pseudo-abnormal examples to serve as abnormal examples when training appearance-based and motion-based binary classifiers to discriminate between normal and abnormal latent features and reconstructions. Furthermore, to ensure that the auto-encoders focus only on the main object inside each bounding box image, we introduce a branch that learns to segment the main object. We compare our framework with the state-of-the-art methods on four benchmark data sets, using various evaluation metrics. Compared to existing methods, the empirical results indicate that our approach achieves favorable performance on all data sets. In addition, we provide region-based and track-based annotations for two large-scale abnormal event detection data sets from the literature, namely ShanghaiTech and Subway.
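The adversarial training strategy summarized in the abstract can be illustrated as a reconstruction objective that is minimized on normal object crops while gradient ascent is applied on the pseudo-abnormal examples, with the reconstruction error then serving as an anomaly score at test time. A minimal NumPy sketch of this idea follows; the mean-squared error formulation and the weighting factor `lam` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def reconstruction_error(x, x_hat):
    # Mean squared reconstruction error per example
    # (averaged over all non-batch dimensions).
    return np.mean((x - x_hat) ** 2, axis=tuple(range(1, x.ndim)))

def adversarial_ae_loss(normal, normal_rec, pseudo, pseudo_rec, lam=0.1):
    # Minimize reconstruction error on normal objects while
    # maximizing it (gradient ascent, hence the minus sign) on
    # the scene-agnostic pseudo-abnormal examples.
    # `lam` is a hypothetical weighting factor for illustration.
    return (reconstruction_error(normal, normal_rec).mean()
            - lam * reconstruction_error(pseudo, pseudo_rec).mean())

def anomaly_score(x, x_hat):
    # At test time, a higher reconstruction error suggests the
    # auto-encoder has not seen similar (normal) objects before.
    return reconstruction_error(x, x_hat)
```

Under this objective, a well-trained auto-encoder reconstructs normal objects accurately and pseudo-abnormal ones poorly, so the loss is driven negative; the paper additionally trains binary classifiers on the resulting latent features and reconstructions.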


Similar Articles

1. A Background-Agnostic Framework With Adversarial Training for Abnormal Event Detection in Video.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4505-4523. doi: 10.1109/TPAMI.2021.3074805. Epub 2022 Aug 4.
2. Statistical Hypothesis Detector for Abnormal Event Detection in Crowded Scenes.
IEEE Trans Cybern. 2017 Nov;47(11):3597-3608. doi: 10.1109/TCYB.2016.2572609. Epub 2016 Jun 13.
3. Variational Abnormal Behavior Detection With Motion Consistency.
IEEE Trans Image Process. 2022;31:275-286. doi: 10.1109/TIP.2021.3130545. Epub 2021 Dec 7.
4. Cyclic Self-Training With Proposal Weight Modulation for Cross-Supervised Object Detection.
IEEE Trans Image Process. 2023;32:1992-2002. doi: 10.1109/TIP.2023.3261752. Epub 2023 Apr 4.
5. BMAN: Bidirectional Multi-scale Aggregation Networks for Abnormal Event Detection.
IEEE Trans Image Process. 2019 Oct 24. doi: 10.1109/TIP.2019.2948286.
6. Abnormal Event Detection and Localization via Adversarial Event Prediction.
IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):3572-3586. doi: 10.1109/TNNLS.2021.3053563. Epub 2022 Aug 3.
7. Training Robust Object Detectors From Noisy Category Labels and Imprecise Bounding Boxes.
IEEE Trans Image Process. 2021;30:5782-5792. doi: 10.1109/TIP.2021.3085208. Epub 2021 Jun 23.
8. Multi-Channel Generative Framework and Supervised Learning for Anomaly Detection in Surveillance Videos.
Sensors (Basel). 2021 May 3;21(9):3179. doi: 10.3390/s21093179.
9. Pixel Objectness: Learning to Segment Generic Objects Automatically in Images and Videos.
IEEE Trans Pattern Anal Mach Intell. 2019 Nov;41(11):2677-2692. doi: 10.1109/TPAMI.2018.2865794. Epub 2018 Aug 17.
10. A Novel Unsupervised Video Anomaly Detection Framework Based on Optical Flow Reconstruction and Erased Frame Prediction.
Sensors (Basel). 2023 May 17;23(10):4828. doi: 10.3390/s23104828.

Cited By

1. Anomaly Detection Based on a 3D Convolutional Neural Network Combining Convolutional Block Attention Module Using Merged Frames.
Sensors (Basel). 2023 Dec 4;23(23):9616. doi: 10.3390/s23239616.
2. Online Video Anomaly Detection.
Sensors (Basel). 2023 Aug 26;23(17):7442. doi: 10.3390/s23177442.
3. Unsupervised Video Anomaly Detection Based on Similarity with Predefined Text Descriptions.
Sensors (Basel). 2023 Jul 9;23(14):6256. doi: 10.3390/s23146256.