

What Can We Learn from Depth Camera Sensor Noise?

Affiliations

Department of Computer Science, University of Haifa, Haifa 3498838, Israel.

Publication information

Sensors (Basel). 2022 Jul 21;22(14):5448. doi: 10.3390/s22145448.

DOI: 10.3390/s22145448
PMID: 35891124
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9321871/
Abstract

Although camera and sensor noise are often disregarded, assumed negligible, or dealt with only in the context of denoising, in this paper we show that significant information about the captured scene and the objects within it can actually be deduced from camera noise. Specifically, we deal with depth cameras and their noise patterns. We show that from sensor noise alone, an object's depth and location in the scene can be deduced. Sensor noise can indicate the source camera type and, within a camera type, the specific device used to acquire the images. Furthermore, we show that the noise distribution on surfaces provides information about the light direction within the scene and allows us to distinguish between real and masked faces. Finally, we show that the size of depth shadows (regions of missing depth data) is a function of the object's distance from the background, its distance from the camera, and the object's size; hence, it can be used to authenticate an object's location in the scene. This paper provides tools and insights into what can be learned from depth camera sensor noise.
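The abstract's claim that depth-shadow size depends on the object's distance from the camera and from the background follows from simple occlusion geometry: an emitter/sensor pair separated by a baseline sees a strip of background behind each object edge that one of the two cannot reach. A minimal sketch of that similar-triangles relation, assuming an idealized pinhole emitter/sensor pair (the function name, parameters, and exact formula are illustrative assumptions, not taken from the paper):

```python
def shadow_width(baseline_m: float, z_obj_m: float, z_bg_m: float) -> float:
    """Approximate width, in metres on the background plane, of the depth
    shadow cast by an object's edge.

    Similar triangles for an idealized pinhole emitter/sensor pair with
    separation `baseline_m`: the occluded strip grows with the gap between
    object and background and shrinks as the object moves away from the camera.
    Illustrative assumption only -- not the paper's model.
    """
    if z_bg_m <= z_obj_m:
        raise ValueError("background must lie behind the object")
    return baseline_m * (z_bg_m - z_obj_m) / z_obj_m

# Example: 7.5 cm baseline, object 1 m from the camera, wall 2 m away
w = shadow_width(0.075, 1.0, 2.0)  # 0.075 m of missing depth on the wall
```

Under this toy model the shadow vanishes as the object approaches the background (z_bg - z_obj → 0) and widens as the object nears the camera, which is consistent with the dependence the abstract describes.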


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/76bf9ceefba0/sensors-22-05448-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/a5a3bceefb81/sensors-22-05448-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/95d0171f4e3c/sensors-22-05448-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/cea4155f3dc6/sensors-22-05448-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/a8eb1332ba50/sensors-22-05448-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/361472535c35/sensors-22-05448-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/dc1f65352b03/sensors-22-05448-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/eb9bf0095047/sensors-22-05448-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/0b6549a9c3f1/sensors-22-05448-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/f508b8080c60/sensors-22-05448-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/f315101bd2c8/sensors-22-05448-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/7c83f85ac9c2/sensors-22-05448-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/ba41232bf42b/sensors-22-05448-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/be2f197a8cca/sensors-22-05448-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/02ba42c1d329/sensors-22-05448-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/a8233ca4e84d/sensors-22-05448-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/7477b968b637/sensors-22-05448-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/b311ea41d797/sensors-22-05448-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/3e21bf36f158/sensors-22-05448-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7180/9321871/fd7f0c6516d9/sensors-22-05448-g020.jpg

Similar articles

1. What Can We Learn from Depth Camera Sensor Noise? Sensors (Basel). 2022 Jul 21;22(14):5448. doi: 10.3390/s22145448.
2. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras. Opt Express. 2010 Oct 25;18(22):22651-76. doi: 10.1364/OE.18.022651.
3. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling. Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
4. Virtual mirror rendering with stationary RGB-D cameras and stored 3-D background. IEEE Trans Image Process. 2013 Sep;22(9):3433-48. doi: 10.1109/TIP.2013.2268941. Epub 2013 Jun 14.
5. Time-of-Flight Sensor Calibration for a Color and Depth Camera Pair. IEEE Trans Pattern Anal Mach Intell. 2015 Jul;37(7):1501-13. doi: 10.1109/TPAMI.2014.2363827.
6. Beyond PRNU: Learning Robust Device-Specific Fingerprint for Source Camera Identification. Sensors (Basel). 2022 Oct 17;22(20):7871. doi: 10.3390/s22207871.
7. Measurement Noise Model for Depth Camera-Based People Tracking. Sensors (Basel). 2021 Jun 30;21(13):4488. doi: 10.3390/s21134488.
8. Can People Infer Distance in a 2D Scene Using the Visual Size and Position of an Object? Vision (Basel). 2022 May 4;6(2):25. doi: 10.3390/vision6020025.
9. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0. Opt Express. 2017 May 1;25(9):9947-9962. doi: 10.1364/OE.25.009947.
10. Add-on optical mask to declutter visual information based on depth for visual prostheses. Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:790-793. doi: 10.1109/EMBC48229.2022.9871030.

Cited by

1. Improving 3D Reconstruction Through RGB-D Sensor Noise Modeling. Sensors (Basel). 2025 Feb 5;25(3):950. doi: 10.3390/s25030950.
2. Performance of Microsoft Azure Kinect DK as a tool for estimating human body segment lengths. Sci Rep. 2024 Jul 9;14(1):15811. doi: 10.1038/s41598-024-66798-0.

References

1. Pulse Based Time-of-Flight Range Sensing. Sensors (Basel). 2018 May 23;18(6):1679. doi: 10.3390/s18061679.
2. Statistical analysis-based error models for the Microsoft Kinect(TM) depth sensor. Sensors (Basel). 2014 Sep 18;14(9):17430-50. doi: 10.3390/s140917430.
3. Spatial uncertainty model for visual features using a Kinect™ sensor. Sensors (Basel). 2012;12(7):8640-62. doi: 10.3390/s120708640. Epub 2012 Jun 26.
4. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors (Basel). 2012;12(2):1437-54. doi: 10.3390/s120201437. Epub 2012 Feb 1.
5. A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw. 2002;13(2):415-25. doi: 10.1109/72.991427.