Wang Siqi, Zeng Yijie, Yu Guang, Cheng Zhen, Liu Xinwang, Zhou Sihang, Zhu En, Kloft Marius, Yin Jianping, Liao Qing
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):2952-2969. doi: 10.1109/TPAMI.2022.3188763. Epub 2023 Feb 3.
Existing unsupervised outlier detection (OD) solutions face a grave challenge with the surge of visual data such as images. Although deep neural networks (DNNs) have proven successful for visual data, deep OD remains difficult due to OD's unsupervised nature. This paper proposes a novel framework named E³Outlier that performs effective and end-to-end deep outlier removal. Its core idea is to introduce self-supervision into deep OD. Specifically, our main solution adopts a discriminative learning paradigm that creates multiple pseudo classes from the given unlabeled data through various data operations, which enables prevalent discriminative DNNs (e.g., ResNet) to be applied to the unsupervised OD problem. Then, with theoretical and empirical evidence, we argue that inlier priority, a property that encourages the DNN to prioritize inliers during self-supervised learning, makes end-to-end OD possible. Meanwhile, unlike the outlierness measures frequently used in previous OD methods (e.g., density or proximity), we explore network uncertainty and validate it as a highly effective outlierness measure, and we further design two practical score refinement strategies to improve OD performance. Finally, in addition to the discriminative learning paradigm above, we also explore solutions that exploit other learning paradigms (i.e., generative learning and contrastive learning) to introduce self-supervision into E³Outlier. Such extensibility not only brings further performance gains on relatively difficult datasets, but also enables E³Outlier to be applied to other OD applications such as video abnormal event detection. Extensive experiments demonstrate that E³Outlier considerably outperforms state-of-the-art counterparts by 10%-30% AUROC. Demo codes are available at https://github.com/demonzyj56/E3Outlier.
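To make the core idea concrete, the following is a minimal sketch of self-supervised, pseudo-class-based outlier scoring, assuming rotation as the data operation, a small stand-in CNN instead of the deeper DNNs (e.g., ResNet) used in the paper, and mean softmax confidence on the true pseudo class as the uncertainty-based outlierness score; it is an illustrative approximation, not the authors' released implementation.

```python
# Hypothetical sketch of pseudo-class self-supervision for unsupervised OD.
# Assumptions (not from the E3Outlier code): rotations as data operations,
# a tiny CNN, and (1 - mean true-class softmax confidence) as the score.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_TRANSFORMS = 4  # rotations by 0, 90, 180, 270 degrees -> 4 pseudo classes

def rotate_batch(x: torch.Tensor, k: int) -> torch.Tensor:
    """Rotate an NCHW image batch by k * 90 degrees (the k-th pseudo class)."""
    return torch.rot90(x, k, dims=(2, 3))

class SmallCNN(nn.Module):
    """Stand-in discriminative network for predicting the applied transform."""
    def __init__(self, in_ch: int = 3, n_classes: int = N_TRANSFORMS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_self_supervised(model, data, epochs: int = 10, lr: float = 1e-3):
    """Train the network to predict which transformation produced each input."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for k in range(N_TRANSFORMS):
            x_k = rotate_batch(data, k)
            y_k = torch.full((x_k.size(0),), k, dtype=torch.long)
            loss = F.cross_entropy(model(x_k), y_k)
            opt.zero_grad()
            loss.backward()
            opt.step()

@torch.no_grad()
def outlier_scores(model, data) -> torch.Tensor:
    """Network-uncertainty scoring: low confidence on the true pseudo class
    across transformations means the sample is more outlier-like."""
    model.eval()
    confidence = torch.zeros(data.size(0))
    for k in range(N_TRANSFORMS):
        probs = F.softmax(model(rotate_batch(data, k)), dim=1)
        confidence += probs[:, k]
    return 1.0 - confidence / N_TRANSFORMS  # higher = more likely an outlier

if __name__ == "__main__":
    # Toy unlabeled data; inliers and outliers would be scored jointly.
    images = torch.rand(64, 3, 32, 32)
    net = SmallCNN()
    train_self_supervised(net, images, epochs=1)
    print(outlier_scores(net, images)[:5])
```

Because inlier priority leads the network to fit inliers first during self-supervised training, inliers tend to receive high true-class confidence (low scores) while outliers remain uncertain, which is what the scoring function above exploits.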