Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors.

Affiliations

TELIN-IPI, Ghent University-imec, St-Pietersnieuwstraat 41, B-9000 Gent, Belgium.

ETRO Department, Vrije Universiteit Brussel-imec, Pleinlaan 2, B-1050 Brussels, Belgium.

Publication Information

Sensors (Basel). 2019 Nov 15;19(22):4989. doi: 10.3390/s19224989.

Abstract

Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the top and the bottom of the pedestrians, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of the person when the bottom of the person is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, less than one minute of observing the walking people is required to reach this accuracy in controlled environments, and in uncontrolled environments it takes only a few minutes to collect enough data for the calibration. Our proposed method performs well in various situations, such as multi-person scenes, occlusions, and even real intersections on the street.
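The abstract outlines a pipeline: detect pose keypoints, match the same pedestrian across views, and compute the extrinsics from the estimated head and foot points. As a rough illustration of that last step only, the sketch below estimates the relative pose of two cameras from matched head/foot pixel coordinates via a standard essential-matrix decomposition in OpenCV. This is a generic two-view approach under assumed known intrinsics, not the paper's actual algorithm, and the function and variable names are hypothetical.

```python
# Minimal sketch (assumptions: known intrinsics, matched 2D keypoints across views).
import numpy as np
import cv2

def relative_extrinsics(pts_cam1, pts_cam2, K1, K2):
    """Estimate R, t of camera 2 w.r.t. camera 1 from 2D-2D correspondences.

    pts_cam1, pts_cam2 : (N, 2) arrays of matched pixel coordinates
                         (e.g., head and foot keypoints collected over many frames).
    K1, K2             : (3, 3) intrinsic matrices of the two cameras.
    """
    # Normalize pixel coordinates so a single essential matrix can be used
    # even when the two cameras have different intrinsics.
    pts1 = cv2.undistortPoints(pts_cam1.reshape(-1, 1, 2).astype(np.float64), K1, None)
    pts2 = cv2.undistortPoints(pts_cam2.reshape(-1, 1, 2).astype(np.float64), K2, None)

    # RANSAC makes the estimate robust to mismatched or poorly detected keypoints.
    E, inliers = cv2.findEssentialMat(pts1, pts2, np.eye(3),
                                      method=cv2.RANSAC, prob=0.999, threshold=1e-3)

    # Decompose E and keep the (R, t) with the most points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, np.eye(3), mask=inliers)
    return R, t  # t is recovered only up to scale; extra cues (e.g., person height) fix the scale
```

The translation here is only defined up to scale, which is why pedestrian-based methods typically rely on additional geometric cues to obtain metric extrinsics.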

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aec3/6891296/44fb6bb5fb59/sensors-19-04989-g001.jpg
