Zhang Yueyuan, Ghosh Arpan, An Yechan, Joo Kyeongjin, Kim SangMin, Kuc Taeyong
Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea.
Sensors (Basel). 2025 Apr 16;25(8):2514. doi: 10.3390/s25082514.
We propose a novel geometry-constrained learning-based method for camera-in-hand visual servoing systems that eliminates the need for camera intrinsic parameters, depth information, and the robot's kinematic model. Our method uses a cerebellar model articulation controller (CMAC) to execute online Jacobian estimation within the control framework. Specifically, we introduce a fixed-dimension, uniform-magnitude error function based on the projective homography matrix. The fixed-dimension error function ensures a constant Jacobian size regardless of the number of feature points, thereby reducing computational complexity. By not relying on individual feature points, the approach maintains robustness even when some features are occluded. The uniform magnitude of the error vector elements simplifies neural network input normalization, thereby enhancing online training efficiency. Furthermore, we incorporate geometric constraints between feature points (such as collinearity preservation) into the network update process, ensuring that model predictions conform to the fundamental principles of projective geometry and eliminating physically impossible control outputs. Experimental and simulation results demonstrate that our approach achieves superior robustness and faster learning rates compared to other model-free image-based visual servoing methods.
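The fixed-dimension property can be illustrated with a small sketch (an illustration of the general idea, not the paper's exact formulation; `estimate_homography` and `homography_error` are names invented here). However many feature points are matched, the projective homography between the current and desired views is a 3×3 matrix, so an error vector built from it, here the deviation of the normalized homography from the identity, always has eight entries:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit H (3x3, up to scale) so that dst ~ H @ src.

    src, dst: (N, 2) arrays of matched image points, N >= 4,
    with no three of the defining points collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the null space of A; take the
    # right-singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def homography_error(src, dst):
    """8-dim error: deviation of the current-to-desired homography
    from identity. Dimension is constant regardless of len(src)."""
    H = estimate_homography(src, dst)
    return (H - np.eye(3)).flatten()[:8]  # drop the fixed H[2,2] = 1 entry
```

When the current view coincides with the desired one, H is the identity and the error vanishes; adding or dropping feature points changes only the least-squares fit of H, never the size of the error vector that the Jacobian acts on.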
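For readers unfamiliar with the CMAC used for the online Jacobian estimation, a generic tile-coding approximator of this kind can be sketched as follows (a minimal one-dimensional illustration with parameters invented here, not the controller or update law from the paper):

```python
import numpy as np

class CMAC:
    """Minimal 1-D cerebellar model articulation controller (tile coding).

    Several overlapping, offset tilings quantize the input; the output is
    the sum of one weight per tiling, and learning is a local delta rule,
    which is what makes CMAC training fast enough to run online.
    """

    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.lo = lo
        self.width = (hi - lo) / (n_tiles - 1)
        self.lr = lr
        self.w = np.zeros((n_tilings, n_tiles + 1))  # +1 guard cell per tiling

    def _active(self, x):
        # One active cell per tiling; tilings are offset by a fraction
        # of a tile width so resolution is finer than a single grid.
        cells = []
        for t in range(self.n_tilings):
            offset = (t / self.n_tilings) * self.width
            i = int((x - self.lo + offset) / self.width)
            cells.append(min(max(i, 0), self.n_tiles))
        return cells

    def predict(self, x):
        return sum(self.w[t, i] for t, i in enumerate(self._active(x)))

    def update(self, x, target):
        # Local delta rule: only the active cells change.
        err = target - self.predict(x)
        for t, i in enumerate(self._active(x)):
            self.w[t, i] += self.lr * err / self.n_tilings
```

Because each update touches only the handful of cells active for the current input, the approximator adapts locally and incrementally, the property that makes CMAC attractive for estimating a Jacobian inside a running control loop.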