
Foreground segmentation in depth imagery using depth and spatial dynamic models for video surveillance applications.

Affiliation

Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, Madrid 28040, Spain.

Publication Information

Sensors (Basel). 2014 Jan 24;14(2):1961-87. doi: 10.3390/s140201961.

DOI: 10.3390/s140201961
PMID: 24469352
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3958249/
Abstract

Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture of Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
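The paper's full method combines a mixture-of-Gaussians background model with a Bayesian network over depth dynamics. As a rough illustration of the background-subtraction component alone, the following is a minimal single-Gaussian-per-pixel sketch on depth frames; all names, thresholds, and learning rates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Simplified per-pixel Gaussian background model for depth frames.

    A pixel is flagged as foreground when its depth deviates by more than
    k standard deviations from the background mean; background statistics
    are then updated with a running average (learning rate alpha) only at
    pixels that matched the background. mean and var are updated in place.
    """
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    bg = ~foreground
    # Running-average update of the matched background statistics.
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    return foreground

# Toy example: a flat wall at 3.0 m with an object at 1.5 m in the centre.
h, w = 8, 8
mean = np.full((h, w), 3.0)   # initial background depth (metres)
var = np.full((h, w), 0.01)   # initial per-pixel variance
frame = np.full((h, w), 3.0)
frame[2:6, 2:6] = 1.5         # object closer to the sensor
mask = update_background(mean, var, frame)
print(mask.sum())             # 16 pixels flagged as foreground
```

The paper goes well beyond this sketch: a full mixture of Gaussians per pixel, plus spatial and depth dynamic models that propagate the foreground region between frames, which a single running Gaussian cannot capture.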


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/54514bf3f91f/sensors-14-01961f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/0bf36bb530af/sensors-14-01961f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/f2907b9af97f/sensors-14-01961f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/dca24e865bad/sensors-14-01961f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/5a4f114f36e8/sensors-14-01961f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/d5415acff0a8/sensors-14-01961f6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/223f/3958249/1f0068ae1aa2/sensors-14-01961f7.jpg

Similar Articles

1. Foreground segmentation in depth imagery using depth and spatial dynamic models for video surveillance applications.
   Sensors (Basel). 2014 Jan 24;14(2):1961-87. doi: 10.3390/s140201961.
2. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm.
   Sensors (Basel). 2017 May 21;17(5):1177. doi: 10.3390/s17051177.
3. Moving-object segmentation using a foreground history map.
   J Opt Soc Am A Opt Image Sci Vis. 2010 Feb 1;27(2):180-7. doi: 10.1364/JOSAA.27.000180.
4. Foreground Detection Based on Superpixel and Semantic Segmentation.
   Comput Intell Neurosci. 2022 Aug 31;2022:4331351. doi: 10.1155/2022/4331351. eCollection 2022.
5. A novel recursive Bayesian learning-based method for the efficient and accurate segmentation of video with dynamic background.
   IEEE Trans Image Process. 2012 Sep;21(9):3865-76. doi: 10.1109/TIP.2012.2199504. Epub 2012 May 15.
6. Depth-color fusion strategy for 3-D scene modeling with Kinect.
   IEEE Trans Cybern. 2013 Dec;43(6):1560-71. doi: 10.1109/TCYB.2013.2271112.
7. Testing dataset for head segmentation accuracy for the algorithms in the 'BGSLibrary' v3.0.0 developed by Andrews Sobral.
   Data Brief. 2020 Oct 8;33:106385. doi: 10.1016/j.dib.2020.106385. eCollection 2020 Dec.
8. FarSeg++: Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery.
   IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13715-13729. doi: 10.1109/TPAMI.2023.3296757. Epub 2023 Oct 3.
9. Deep Features Homography Transformation Fusion Network-A Universal Foreground Segmentation Algorithm for PTZ Cameras and a Comparative Study.
   Sensors (Basel). 2020 Jun 17;20(12):3420. doi: 10.3390/s20123420.
10. Robust Online Matrix Factorization for Dynamic Background Subtraction.
    IEEE Trans Pattern Anal Mach Intell. 2018 Jul;40(7):1726-1740. doi: 10.1109/TPAMI.2017.2732350. Epub 2017 Jul 27.

Cited By

1. Efficient Depth Enhancement Using a Combination of Color and Depth Information.
   Sensors (Basel). 2017 Jul 1;17(7):1544. doi: 10.3390/s17071544.
2. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm.
   Sensors (Basel). 2017 May 21;17(5):1177. doi: 10.3390/s17051177.
3. Three-dimensional object motion and velocity estimation using a single computational RGB-D camera.

References

1. Depth-color fusion strategy for 3-D scene modeling with Kinect.
   IEEE Trans Cybern. 2013 Dec;43(6):1560-71. doi: 10.1109/TCYB.2013.2271112.
2. Background subtraction based on color and depth using active sensors.
   Sensors (Basel). 2013 Jul 12;13(7):8895-915. doi: 10.3390/s130708895.
3. On the use of simple geometric descriptors provided by RGB-D sensors for re-identification.
   Sensors (Basel). 2015 Jan 8;15(1):995-1007. doi: 10.3390/s150100995.
4. Sensors and technologies in Spain: state-of-the-art.
   Sensors (Basel). 2014 Aug 19;14(8):15282-303. doi: 10.3390/s140815282.
   Sensors (Basel). 2013 Jun 27;13(7):8222-38. doi: 10.3390/s130708222.
5. Enhanced computer vision with Microsoft Kinect sensor: a review.
   IEEE Trans Cybern. 2013 Oct;43(5):1318-34. doi: 10.1109/TCYB.2013.2265378. Epub 2013 Jun 25.
6. Deciphering the crowd: modeling and identification of pedestrian group motion.
   Sensors (Basel). 2013 Jan 14;13(1):875-97. doi: 10.3390/s130100875.
7. A coded aperture compressive imaging array and its visual detection and tracking algorithms for surveillance systems.
   Sensors (Basel). 2012 Oct 29;12(11):14397-415. doi: 10.3390/s121114397.
8. Accuracy and resolution of Kinect depth data for indoor mapping applications.
   Sensors (Basel). 2012;12(2):1437-54. doi: 10.3390/s120201437. Epub 2012 Feb 1.
9. ViBe: a universal background subtraction algorithm for video sequences.
   IEEE Trans Image Process. 2011 Jun;20(6):1709-24. doi: 10.1109/TIP.2010.2101613. Epub 2010 Dec 23.
10. A self-organizing approach to background subtraction for visual surveillance applications.
    IEEE Trans Image Process. 2008 Jul;17(7):1168-77. doi: 10.1109/TIP.2008.924285.
11. A texture-based method for modeling the background and detecting moving objects.
    IEEE Trans Pattern Anal Mach Intell. 2006 Apr;28(4):657-62. doi: 10.1109/TPAMI.2006.68.