Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, San Luis Potosi 78290, Mexico.
Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Queretaro 76130, Mexico.
Sensors (Basel). 2022 Sep 27;22(19):7323. doi: 10.3390/s22197323.
It is a challenging task to track objects moving along an unknown trajectory. Conventional model-based controllers require detailed knowledge of a robot's kinematics and the target's trajectory, so tracking precision relies heavily on the kinematics used to infer that trajectory. Control implementation in parallel robots is especially difficult due to their complex kinematics. Vision-based controllers are robust to uncertainties in a robot's kinematic model because they can correct end-point trajectories as error estimates become available; robustness is guaranteed by taking the vision sensor's model into account when designing the control law. All camera space manipulation (CSM) models in the literature are position-based: they establish a mapping between the end-effector position in Cartesian space and its position in sensor space. Such models are not appropriate for tracking moving targets because they treat the relationship between the target and the end effector as a fixed point. The present work builds upon the literature by presenting a novel velocity-based CSM control that establishes a relationship between a movable trajectory and the end-effector position. Its efficacy is shown on a Delta-type parallel robot. Three types of experiments were performed: (a) static tracking (average error of 1.09 mm); (b) constant-speed linear trajectory tracking at speeds of 7, 9.5, and 12 cm/s (tracking errors of 8.89, 11.76, and 18.65 mm, respectively); (c) freehand trajectory tracking (maximum tracking error of 11.79 mm during motion and maximum static positioning error of 1.44 mm once the object stopped). The resulting control cycle time was 48 ms. The results show a reduction in tracking errors for this robot relative to previously published control strategies.
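The core idea of velocity-based tracking in sensor space can be illustrated with a minimal sketch: the velocity command combines a feedforward of the target's camera-space velocity with a proportional correction of the camera-space error. Everything below is a hypothetical toy example, not the paper's actual CSM formulation: the image Jacobian, gain, and target motion are all assumed for illustration.

```python
import numpy as np

# Hypothetical toy sketch of velocity-based tracking in camera space.
# Assumptions (not from the paper): a known, constant 2-D image
# Jacobian J, a proportional gain, and a target moving at constant
# velocity. Only the 48 ms cycle time comes from the abstract.

DT = 0.048          # control cycle of 48 ms, as reported
GAIN = 2.0          # proportional gain (assumed)
J = np.eye(2)       # image Jacobian (identity for this toy example)

def control_step(ee_cam, target_cam, target_vel_cam):
    """Velocity command: target-velocity feedforward plus
    proportional correction of the camera-space error."""
    error = target_cam - ee_cam
    cam_vel_cmd = target_vel_cam + GAIN * error
    # Map the camera-space velocity command back to Cartesian space.
    return np.linalg.solve(J, cam_vel_cmd)

# Simulate tracking a target moving at constant speed (units: mm, s).
ee = np.array([0.0, 0.0])
target = np.array([50.0, 20.0])
target_vel = np.array([70.0, 0.0])   # e.g. 7 cm/s expressed in mm/s

for _ in range(200):
    v = control_step(ee, target, target_vel)
    ee = ee + v * DT                 # integrate end-effector motion
    target = target + target_vel * DT

print(round(float(np.linalg.norm(target - ee)), 3))
```

With the feedforward term, the camera-space error decays geometrically each cycle regardless of the target's speed, which is what distinguishes this scheme from a purely position-based correction that would lag a moving target.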