Song Qisong, Li Shaobo, Bai Qiang, Yang Jing, Zhang Xingxing, Li Zhiang, Duan Zhongjing
College of Mechanical Engineering, Guizhou University, Guiyang 550025, China.
State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China.
Micromachines (Basel). 2021 Oct 20;12(11):1273. doi: 10.3390/mi12111273.
In the industrial field, the anthropomorphism of grasping robots is the trend of future development. However, the basic vision technology adopted by grasping robots at this stage suffers from problems such as inaccurate positioning and low recognition efficiency. To address this practical problem and achieve more accurate positioning and recognition of objects, this paper proposes an object detection method for grasping robots based on an improved YOLOv5. Firstly, a robot object detection platform was designed and a wooden-block image dataset was constructed. Secondly, the Eye-In-Hand calibration method was used to obtain the relative three-dimensional pose of the object. Then, network pruning was used to optimize the YOLOv5 model along two dimensions: network depth and network width. Finally, hyperparameter optimization was carried out. The simulation results show that the improved YOLOv5 network proposed in this paper has better object detection performance: the recognition precision, recall, mAP value, and F1 score are 99.35%, 99.38%, 99.43%, and 99.41%, respectively. Compared with the original YOLOv5s, YOLOv5m, and YOLOv5l models, the mAP of the YOLOv5_ours model increased by 1.12%, 1.2%, and 1.27%, respectively, while the model size was reduced by 10.71%, 70.93%, and 86.84%, respectively. The object detection experiment verified the feasibility of the proposed method.
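The depth/width optimization described above follows the way YOLOv5 parameterizes its model family: each variant is generated from one architecture by a depth multiplier (scaling how often a block repeats) and a width multiplier (scaling channel counts, rounded to a multiple of 8). The sketch below illustrates that scaling rule; the exact multipliers of the pruned YOLOv5_ours model are not given in the abstract, so the values used here are purely illustrative.

```python
import math

def make_divisible(x, divisor=8):
    # Round a channel count up to the nearest multiple of `divisor`,
    # as YOLOv5 does when applying its width multiplier.
    return math.ceil(x / divisor) * divisor

def scale_layer(base_repeats, base_channels, depth_mult, width_mult):
    # Scale a block's repeat count by the depth multiplier (keeping at
    # least one repeat) and its channel count by the width multiplier.
    repeats = max(round(base_repeats * depth_mult), 1)
    channels = make_divisible(base_channels * width_mult)
    return repeats, channels

# YOLOv5s uses depth_multiple=0.33 and width_multiple=0.50; a block
# defined with 9 repeats and 512 channels in the base config becomes:
print(scale_layer(9, 512, 0.33, 0.50))  # → (3, 256)
```

Shrinking these two multipliers further is one way to obtain a smaller model of the kind the paper reports, trading capacity for a reduced parameter count.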