Xu Chi, Chen Jiale, Yao Mengyang, Zhou Jun, Zhang Lijun, Liu Yi
School of Automation, China University of Geosciences, Wuhan 430074, China.
Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China.
Sensors (Basel). 2020 Nov 27;20(23):6790. doi: 10.3390/s20236790.
6DoF object pose estimation is a foundation for many important applications, such as robotic grasping and automatic driving. However, estimating the 6DoF pose of transparent objects, which are common in daily life, is very challenging: the optical characteristics of transparent materials cause significant depth errors, which in turn lead to false estimates. To solve this problem, a two-stage approach is proposed to estimate the 6DoF pose of a transparent object from a single RGB-D image. In the first stage, the influence of the depth error is eliminated by transparent-object segmentation, surface normal recovery, and RANSAC plane estimation. In the second stage, an extended point-cloud representation is presented to estimate object pose accurately and efficiently. To the best of our knowledge, this is the first deep-learning-based approach that focuses on 6DoF pose estimation of transparent objects from a single RGB-D image. Experimental results show that the proposed approach can effectively estimate the 6DoF pose of transparent objects and outperforms the state-of-the-art baselines by a large margin.
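The first stage relies on RANSAC plane estimation to suppress the corrupted depth returned for transparent surfaces. The abstract does not give implementation details, so the following is only a generic sketch of RANSAC plane fitting on a point cloud (function name, parameters, and thresholds are our own illustrative choices, not from the paper):

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.

    Returns (unit_normal, d, inlier_mask). Parameters are illustrative:
    `dist_thresh` is the inlier distance in the cloud's length units.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (near-collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p0
        # Keep the candidate with the most points near the plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

In a pipeline like the one described, the inlier mask would identify the dominant support plane so that unreliable depth on the transparent object itself can be discarded or replaced.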