Three-dimensional object motion and velocity estimation using a single computational RGB-D camera.

Authors

Lee Seungwon, Jeong Kyungwon, Park Jinho, Paik Joonki

Affiliation

Image Processing and Intelligent Systems Laboratory, Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, Seoul 156-756, Korea.

Publication

Sensors (Basel). 2015 Jan 8;15(1):995-1007. doi: 10.3390/s150100995.

DOI: 10.3390/s150100995
PMID: 25580899
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4327060/
Abstract

In this paper, a three-dimensional (3D) object moving direction and velocity estimation method is presented using a dual off-axis color-filtered aperture (DCA)-based computational camera. Conventional object tracking methods provided only two-dimensional (2D) states of an object in the image for the target representation. The proposed method estimates depth information in the object region from a single DCA camera that transforms 2D spatial information into 3D model parameters of the object. We also present a calibration method of the DCA camera to estimate the entire set of camera parameters for a practical implementation. Experimental results show that the proposed DCA-based color and depth (RGB-D) camera can calculate the 3D object moving direction and velocity of a randomly moving object in a single-camera framework.
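The paper's DCA-specific depth estimation is not reproduced here, but the final step it describes — turning per-frame depth into a 3D motion direction and velocity — can be sketched under standard assumptions. The snippet below back-projects a tracked pixel with its estimated depth into 3D camera coordinates via a pinhole model, then takes finite differences between frames. The intrinsics (`FX`, `FY`, `CX`, `CY`), the helper names, and the sample pixel values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative values, not from the paper).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point in pixels

def backproject(u, v, depth):
    """Map a pixel (u, v) with depth z (meters) to 3D camera coordinates."""
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def motion_and_velocity(p_prev, p_curr, dt):
    """Finite-difference 3D motion direction (unit vector) and speed (m/s)."""
    disp = p_curr - p_prev
    dist = float(np.linalg.norm(disp))
    direction = disp / dist if dist > 0 else np.zeros(3)
    return direction, dist / dt

# Object center tracked over two frames captured 1/30 s apart;
# depths would come from the DCA color-shift estimate in the paper.
p0 = backproject(300, 220, 2.0)
p1 = backproject(310, 220, 1.9)
direction, speed = motion_and_velocity(p0, p1, 1.0 / 30.0)
```

In the paper's single-camera framework the depth argument would be supplied by the DCA color-shift model rather than a second sensor; everything downstream of that estimate reduces to the geometry above.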


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/eb5fec59a7c6/sensors-15-00995f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/dd7250a4d705/sensors-15-00995f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/7853670be62d/sensors-15-00995f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/c9cc6955cfcc/sensors-15-00995f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/7c625c60cbd9/sensors-15-00995f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/d2b01604afff/sensors-15-00995f6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/606bbdb73529/sensors-15-00995f7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/34559d837bd1/sensors-15-00995f8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/7f58240560d2/sensors-15-00995f9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/05e8ae86562a/sensors-15-00995f10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eaed/4327060/449d4fcf1746/sensors-15-00995f11.jpg

Similar Articles

1
Three-dimensional object motion and velocity estimation using a single computational RGB-D camera.
Sensors (Basel). 2015 Jan 8;15(1):995-1007. doi: 10.3390/s150100995.
2
Distance estimation using a single computational camera with dual off-axis color filtered apertures.
Opt Express. 2013 Oct 7;21(20):23116-29. doi: 10.1364/OE.21.023116.
3
Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.
Sensors (Basel). 2016 Jun 25;16(7):982. doi: 10.3390/s16070982.
4
Multifocusing and depth estimation using a color shift model-based computational camera.
IEEE Trans Image Process. 2012 Sep;21(9):4152-66. doi: 10.1109/TIP.2012.2202671. Epub 2012 Jun 8.
5
Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589.
6
Segmentation-Based Color Channel Registration for Disparity Estimation of Dual Color-Filtered Aperture Camera.
Sensors (Basel). 2018 Sep 20;18(10):3174. doi: 10.3390/s18103174.
7
Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints.
IEEE Trans Cybern. 2018 Aug;48(8):2485-2499. doi: 10.1109/TCYB.2017.2740952. Epub 2017 Sep 6.
8
A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.
Sensors (Basel). 2018 Jan 15;18(1):235. doi: 10.3390/s18010235.
9
A New Model of RGB-D Camera Calibration Based On 3D Control Field.
Sensors (Basel). 2019 Nov 21;19(23):5082. doi: 10.3390/s19235082.
10
Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras.
Sensors (Basel). 2019 Mar 29;19(7):1539. doi: 10.3390/s19071539.

Cited By

1
A Self-Assessment Stereo Capture Model Applicable to the Internet of Things.
Sensors (Basel). 2015 Aug 21;15(8):20925-44. doi: 10.3390/s150820925.

References

1
Thermal tracking of sports players.
Sensors (Basel). 2014 Jul 29;14(8):13679-91. doi: 10.3390/s140813679.
2
Directional joint bilateral filter for depth images.
Sensors (Basel). 2014 Jun 26;14(7):11362-78. doi: 10.3390/s140711362.
3
Robust pedestrian tracking and recognition from FLIR video: a unified approach via sparse coding.
Sensors (Basel). 2014 Jun 24;14(6):11245-59. doi: 10.3390/s140611245.
4
Foreground segmentation in depth imagery using depth and spatial dynamic models for video surveillance applications.
Sensors (Basel). 2014 Jan 24;14(2):1961-87. doi: 10.3390/s140201961.
5
Distance estimation using a single computational camera with dual off-axis color filtered apertures.
Opt Express. 2013 Oct 7;21(20):23116-29. doi: 10.1364/OE.21.023116.
6
Background subtraction based on color and depth using active sensors.
Sensors (Basel). 2013 Jul 12;13(7):8895-915. doi: 10.3390/s130708895.
7
Multifocusing and depth estimation using a color shift model-based computational camera.
IEEE Trans Image Process. 2012 Sep;21(9):4152-66. doi: 10.1109/TIP.2012.2202671. Epub 2012 Jun 8.
8
A Bayesian framework for human body pose tracking from depth image sequences.
Sensors (Basel). 2010;10(5):5280-93. doi: 10.3390/s100505280. Epub 2010 May 25.
9
Computational cameras: convergence of optics and processing.
IEEE Trans Image Process. 2011 Dec;20(12):3322-40. doi: 10.1109/TIP.2011.2171700. Epub 2011 Oct 18.
10
Elastic registration in the presence of intensity variations.
IEEE Trans Med Imaging. 2003 Jul;22(7):865-74. doi: 10.1109/TMI.2003.815069.