School of Mechanical Engineering, Hebei University of Technology, Tianjin, 300131, China.
Sci Rep. 2021 Nov 23;11(1):22744. doi: 10.1038/s41598-021-02225-y.
In the electronics industry, rapid recognition of objects to be grasped from digital images is essential for the visual guidance of intelligent robots. However, electronic components are small, difficult to distinguish, and in motion on a conveyor belt, which makes target detection more difficult. To address this, the YOLOv4-tiny method is adopted for detecting electronic components and then improved: different network structures are built for the adaptive integration of middle- and high-level features, addressing the original algorithm's indiscriminate integration of all feature information. The method is validated on an electronic component dataset. Experimental results show that the accuracy of the original algorithm improves from 93.74% to 98.6%. Compared with other current mainstream algorithms, such as Faster R-CNN, SSD, RefineDet, EfficientDet, and YOLOv4, the method maintains high detection accuracy at the fastest speed. It can serve as a technical reference for the development of manufacturing robots in the electronics industry.
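The abstract's key idea, replacing indiscriminate feature concatenation with an adaptive, weighted integration of middle- and high-level features, can be illustrated with a minimal sketch. This is not the paper's actual network: the function names, the use of softmax-normalized scalar weights, and the assumption that both feature maps are already spatially aligned are all illustrative simplifications.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(w - np.max(w))
    return e / e.sum()

def adaptive_fuse(mid_feat, high_feat, weights):
    """Fuse a mid-level and a high-level feature map using learned
    scalar weights (softmax-normalized), rather than concatenating
    all feature information indiscriminately.

    Assumes both maps already share the same (channels, H, W) shape;
    in a real detector the high-level map would be upsampled first.
    """
    a = softmax(weights)
    return a[0] * mid_feat + a[1] * high_feat

# Toy feature maps of shape (channels, H, W).
mid = np.ones((4, 8, 8))
high = np.full((4, 8, 8), 2.0)

# Equal raw weights -> softmax gives [0.5, 0.5], so each cell is (1 + 2) / 2.
fused = adaptive_fuse(mid, high, np.array([0.0, 0.0]))
print(fused[0, 0, 0])  # → 1.5
```

In a trained network, the raw weights would be learnable parameters updated by backpropagation, letting the model decide per fusion point how much each feature level should contribute.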