
3-D target-based distributed smart camera network localization.

Affiliation

Department of Computer Science, Portland State University, Portland, OR 97207, USA.

Publication information

IEEE Trans Image Process. 2010 Oct;19(10):2530-9. doi: 10.1109/TIP.2010.2062032. Epub 2010 Jul 29.

Abstract

For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution that requires successively showing a feature-point-rich 3-D target to all cameras; then, using the known geometry of the 3-D target, cameras estimate and decompose projection matrices to compute their position and orientation relative to the coordinatization of the 3-D target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute the translations and rotations relating the different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable for battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and to pass transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1'' when the 3-D target's feature points fill only 2.9% of the frame area.
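The per-camera pose step described above relies on standard projection-matrix decomposition. Below is a minimal sketch of that idea, not the authors' implementation: it assumes a pinhole model P = K[R | t] and uses an RQ factorization to split the left 3×3 block of P into intrinsics K and rotation R, then recovers the camera centre C = -Rᵀt in the target's coordinate frame. All function names and the synthetic pose are illustrative.

```python
import numpy as np

def rq(M):
    """RQ decomposition: M = R @ Q with R upper-triangular, Q orthogonal.
    Implemented via QR of the row-flipped transpose."""
    Q, R = np.linalg.qr(np.flipud(M).T)
    R = np.fliplr(np.flipud(R.T))
    Q = np.flipud(Q.T)
    return R, Q

def decompose_projection(P):
    """Split a 3x4 projection matrix P = K [R | t] into intrinsics K
    (normalized so K[2,2] = 1), rotation R, and camera centre C = -R^T t
    expressed in the 3-D target's coordinate frame."""
    K, R = rq(P[:, :3])
    S = np.diag(np.sign(np.diag(K)))   # resolve the RQ sign ambiguity
    K, R = K @ S, S @ R                # S @ S = I, so K @ R is unchanged
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, -R.T @ t

# Synthetic check: build P from a known pose, then recover that pose.
theta = np.deg2rad(30)
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about z
C_true = np.array([1.0, 2.0, 3.0])                       # camera centre
K_true = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P = K_true @ np.hstack([R_true, (-R_true @ C_true)[:, None]])
K, R, C = decompose_projection(P)
```

In the paper's setting, P would instead be estimated from 2-D detections of the 3-D target's feature points at known target coordinates (e.g. by DLT), and the recovered (R, C) would be relative to that target placement's frame.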

