
Context-Unsupervised Adversarial Network for Video Sensors.

Affiliations

Image Processing Group, Department of Signal Theory and Communications, Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain.

Publication details

Sensors (Basel). 2022 Apr 21;22(9):3171. doi: 10.3390/s22093171.

DOI: 10.3390/s22093171
PMID: 35590863
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9102692/
Abstract

Foreground object segmentation is a crucial first step for surveillance systems based on networks of video sensors. This problem in the context of dynamic scenes has been widely explored over the last two decades, but open research questions remain due to challenges such as strong shadows, background clutter, and illumination changes. After years of solid work based on statistical background pixel modeling, most current proposals use convolutional neural networks (CNNs) either to model the background or to make the foreground/background decision. Although these new techniques achieve outstanding results, they usually require specific training for each scene, which is unfeasible if the goal is software for embedded video systems and smart cameras. Our approach requires neither context- nor scene-specific training, and thus no manual labeling. We propose a network that performs a refinement step on top of conventional state-of-the-art background subtraction systems. Because a statistical technique produces the rough mask, the network does not need to be trained for each scene. The proposed method can exploit the specificity of classic techniques while obtaining the highly accurate segmentation that a deep learning system provides. We also show the advantage of using an adversarial network to improve generalization and produce more consistent results than an equivalent non-adversarial network. The reported results were obtained by training the network on a common database, without fine-tuning for specific scenes. Experiments on the unseen part of the CDNet database yielded an F-score of 0.82, and 0.87 was achieved on the LASIESTA database, which is unrelated to the training data. On this last database, the results outperformed those in the official table by 8.75%. The results on CDNet are well above those of methods not based on CNNs and, according to the literature, among the best for context-unsupervised CNN systems.
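The F-scores reported above are the standard segmentation F-measure: the harmonic mean of precision and recall computed over foreground pixels. A minimal sketch of that metric (illustrative only; the benchmarks compute it per video and average):

```python
import numpy as np

def f_score(pred, gt):
    """F-measure over foreground pixels: harmonic mean of precision and recall."""
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy masks: ground truth is a 4x4 foreground square; prediction misses one row.
gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1            # 16 true foreground pixels
pred = np.zeros_like(gt)
pred[3:6, 2:6] = 1          # tp=12, fp=0, fn=4 -> precision 1.0, recall 0.75
print(round(f_score(pred, gt), 3))  # -> 0.857
```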

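The statistical rough-mask stage the abstract builds on can be illustrated with a simple running-average background model. This is an illustrative NumPy sketch, not the authors' pipeline; `alpha` and `thresh` are assumed parameters, and the paper feeds such a rough mask into a CNN refinement network:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential moving-average background model (classic statistical approach)."""
    return (1 - alpha) * bg + alpha * frame

def rough_mask(bg, frame, thresh=30.0):
    """Per-pixel foreground mask: |frame - background| > threshold."""
    return (np.abs(frame.astype(float) - bg) > thresh).astype(np.uint8)

# Synthetic sequence: static ~100-gray background, a bright square moving right.
rng = np.random.default_rng(0)
model = np.full((64, 64), 100.0)
for t in range(10):
    frame = 100.0 + rng.normal(0, 2, (64, 64))
    frame[20:30, 10 + t:20 + t] = 200.0   # moving foreground object
    mask = rough_mask(model, frame)       # detect before absorbing the frame
    model = update_background(model, frame)
```

A refinement network would then clean up the ghosting and ragged borders that such a rough mask exhibits, which is the role of the proposed adversarially trained stage.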

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/09b3df03d165/sensors-22-03171-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/dede0edefb5c/sensors-22-03171-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/753f0e2222f3/sensors-22-03171-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/0fa830025377/sensors-22-03171-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/d5c414973b83/sensors-22-03171-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/f9fbeb5039f0/sensors-22-03171-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dd19/9102692/f53c6a18b7ef/sensors-22-03171-g006.jpg

Similar articles

1
Context-Unsupervised Adversarial Network for Video Sensors.
Sensors (Basel). 2022 Apr 21;22(9):3171. doi: 10.3390/s22093171.
2
Structural inference embedded adversarial networks for scene parsing.
PLoS One. 2018 Apr 12;13(4):e0195114. doi: 10.1371/journal.pone.0195114. eCollection 2018.
3
Deep neural network concepts for background subtraction: A systematic review and comparative evaluation.
Neural Netw. 2019 Sep;117:8-66. doi: 10.1016/j.neunet.2019.04.024. Epub 2019 May 15.
4
S-CUDA: Self-cleansing unsupervised domain adaptation for medical image segmentation.
Med Image Anal. 2021 Dec;74:102214. doi: 10.1016/j.media.2021.102214. Epub 2021 Aug 12.
5
Catheter segmentation in X-ray fluoroscopy using synthetic data and transfer learning with light U-nets.
Comput Methods Programs Biomed. 2020 Aug;192:105420. doi: 10.1016/j.cmpb.2020.105420. Epub 2020 Feb 29.
6
Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images.
Med Phys. 2019 Jun;46(6):2669-2682. doi: 10.1002/mp.13553. Epub 2019 May 6.
7
Deep Features Homography Transformation Fusion Network: A Universal Foreground Segmentation Algorithm for PTZ Cameras and a Comparative Study.
Sensors (Basel). 2020 Jun 17;20(12):3420. doi: 10.3390/s20123420.
8
Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks.
Artif Intell Med. 2021 Jul;117:102109. doi: 10.1016/j.artmed.2021.102109. Epub 2021 May 14.
9
Image generation by GAN and style transfer for agar plate image segmentation.
Comput Methods Programs Biomed. 2020 Feb;184:105268. doi: 10.1016/j.cmpb.2019.105268. Epub 2019 Dec 17.
10
Boundary-Weighted Domain Adaptive Neural Network for Prostate MR Image Segmentation.
IEEE Trans Med Imaging. 2020 Mar;39(3):753-763. doi: 10.1109/TMI.2019.2935018. Epub 2019 Aug 13.

Cited by

1
Analytics and Applications of Audio and Image Sensing Techniques.
Sensors (Basel). 2022 Nov 3;22(21):8443. doi: 10.3390/s22218443.

References

1
3DCD: Scene Independent End-to-End Spatiotemporal Feature Learning Framework for Change Detection in Unseen Videos.
IEEE Trans Image Process. 2021;30:546-558. doi: 10.1109/TIP.2020.3037472. Epub 2020 Nov 24.
2
Deep neural network concepts for background subtraction: A systematic review and comparative evaluation.
Neural Netw. 2019 Sep;117:8-66. doi: 10.1016/j.neunet.2019.04.024. Epub 2019 May 15.
3
Background-Foreground Modeling Based on Spatiotemporal Sparse Subspace Clustering.
IEEE Trans Image Process. 2017 Dec;26(12):5840-5854. doi: 10.1109/TIP.2017.2746268. Epub 2017 Aug 29.
4
Background Subtraction with Dirichlet Process Mixture Models.
IEEE Trans Pattern Anal Mach Intell. 2014 Apr;36(4):670-83. doi: 10.1109/TPAMI.2013.239.