School of Electrical Engineering, Yanshan University, Qinhuangdao, China.
Hebei Key Laboratory of Testing and Metrology Technology and Instruments, Yanshan University, Qinhuangdao, China.
PLoS One. 2022 Jun 8;17(6):e0269175. doi: 10.1371/journal.pone.0269175. eCollection 2022.
This paper focuses on 6D pose estimation of weakly textured targets from RGB-D images. We propose DOPE++, a 6D pose estimation algorithm based on a deep neural network for weakly textured objects, to address the poor real-time performance and low recognition efficiency of pose estimation when a robot grasps parts with weak texture. More specifically, we first introduce depthwise separable convolutions to lighten the original deep object pose estimation (DOPE) network structure and improve its operating speed. Second, we introduce an attention mechanism to improve network accuracy. To address the original DOPE network's low recognition efficiency for parts with occlusion relationships and its misrecognition of parts at overly large or small scales, we propose a random mask local processing method and a multiscale fusion pose estimation module. To address the single-background limitation of the part pose estimation dataset, we construct a virtual dataset for data expansion, forming a hybrid dataset. The results show that our proposed DOPE++ network improves the real-time performance of 6D pose estimation and enhances the recognition of parts at different scales without loss of accuracy.
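The lightening effect of the depthwise separable convolution mentioned above can be illustrated with a simple parameter count: a standard k×k convolution uses k·k·C_in·C_out weights, while a depthwise separable convolution factors this into a per-channel spatial step (k·k·C_in) plus a 1×1 pointwise channel-mixing step (C_in·C_out). The sketch below is illustrative only, with hypothetical layer sizes not taken from the paper:

```python
# Sketch (not the authors' code): why depthwise separable convolution
# lightens a network such as DOPE, via a parameter-count comparison.

def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution mixes space and channels jointly.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel (k*k*c_in),
    # pointwise step: a 1 x 1 convolution mixing channels (c_in*c_out).
    return k * k * c_in + c_in * c_out

# Hypothetical layer sizes for illustration only.
k, c_in, c_out = 3, 128, 256
std = standard_conv_params(k, c_in, c_out)        # 294912
sep = depthwise_separable_params(k, c_in, c_out)  # 33920
print(std, sep, round(std / sep, 1))              # ~8.7x fewer parameters
```

For a 3×3 layer with 128 input and 256 output channels, the separable form needs roughly 8.7× fewer weights, which is the kind of reduction that speeds up inference without a structural redesign of the network.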