Martínez-Franco Juan Camilo, Rojas-Álvarez Ariel, Tabares Alejandra, Álvarez-Martínez David, Marín-Moreno César Augusto
Department of Industrial Engineering, Universidad de los Andes, Bogota 111711, Colombia.
Integra S.A., Pereira 660003, Colombia.
Sensors (Basel). 2024 Jul 18;24(14):4662. doi: 10.3390/s24144662.
Marker-less hand-eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computation requirements, pose difficulties for this purpose because projected image coordinates do not fully determine 3D position. In this work, we introduce a hand-eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects such as the links of robotic manipulators perform poorly in the orientation domain, but this can be overcome through the analysis of the latent-space vectors constructed in the autoencoding process. This technique is computationally inexpensive and can be run in real time under markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and the captured point clouds to measure translation and orientation errors, and we compare the results to a baseline based on traditional checkerboard markers.
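The latent-space analysis mentioned above typically works by comparing the encoding of an observed view against a precomputed codebook of latent vectors, one per sampled orientation. The following is a minimal sketch of that retrieval step, not the paper's implementation: the codebook entries here are random stand-ins, whereas in the described approach each entry would come from encoding a rendered view of a robot link at a known rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: one latent vector per sampled rotation.
# In practice each row would be the autoencoder's encoding of a
# rendered view of the robot link at a known orientation.
n_rotations, latent_dim = 512, 128
codebook = rng.normal(size=(n_rotations, latent_dim))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def retrieve_rotation_index(z: np.ndarray) -> int:
    """Return the index of the codebook entry most similar to the
    query latent z, using cosine similarity."""
    z = z / np.linalg.norm(z)
    return int(np.argmax(codebook @ z))

# A query latent close to entry 42 (entry plus small noise) should
# retrieve index 42; the retrieved index maps back to a rotation.
query = codebook[42] + 0.01 * rng.normal(size=latent_dim)
print(retrieve_rotation_index(query))
```

Because retrieval is a single matrix-vector product over the codebook, it is cheap enough for the real-time operation the abstract claims, and it sidesteps direct regression of orientation.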