Industrial Research Institute of Robotics and Intelligent Equipment, Harbin Institute of Technology, Weihai 264209, China.
Department of Industrial Engineering, University of Houston, Houston, TX 77004, USA.
Sensors (Basel). 2019 Jan 14;19(2):303. doi: 10.3390/s19020303.
As the aging of the population becomes more severe, wheelchair-mounted robotic arms (WMRAs) are gaining an increased amount of attention. Laser pointer interaction is an attractive method that enables users to unambiguously designate objects to be picked up, and as an intuitive interaction mode it also gives users a greater sense of participation in the interaction process. However, the issue of human–robot interaction remains to be properly tackled, and traditional laser point interaction still suffers from poor real-time performance and low accuracy against dynamic backgrounds. In this study, combining an advanced laser point detection method with an improved pose estimation algorithm, a laser pointer is used to facilitate the interaction between a human and a WMRA in an indoor environment. Assistive grasping using laser selection consists of two key steps. In the first step, the images captured by an RGB-D camera are pre-processed and then fed to a convolutional neural network (CNN) to determine the 2D coordinates of the laser point and the objects within the image. Meanwhile, the centroid coordinates of the selected object are obtained from the depth information. In this way, the object to be picked up and its location are determined. The experimental results show that the laser point can be detected with almost 100% accuracy in a complex environment. In the second step, a compound pose-estimation algorithm designed for a sparse set of multi-view templates is applied, which performs both coarse and precise matching of the target against the template objects, greatly improving the grasping performance. The proposed algorithms were implemented on a Kinova Jaco robotic arm, and the experimental results demonstrate their effectiveness. Compared with commonly accepted methods, the time consumption of pose generation is reduced from 5.36 to 4.43 s, while the pose estimation error is significantly reduced from 21.31% to 3.91%.
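The first step above ends by converting the 2D detection (laser point or object region) plus depth into a 3D centroid in the camera frame. The abstract does not give the exact formulation, but a minimal sketch of this stage, assuming a standard pinhole camera model with hypothetical Kinect-like intrinsics (`fx`, `fy`, `cx`, `cy` are illustrative values, not those of the paper's sensor), could look like this:

```python
import numpy as np

def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) into 3D camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def object_centroid(depth_map, mask, fx, fy, cx, cy):
    """Average the back-projected 3D points of all pixels in a binary
    object mask (e.g. the region the CNN associates with the laser-selected
    object) to obtain the object's centroid in the camera frame."""
    vs, us = np.nonzero(mask)  # row (v) and column (u) indices of mask pixels
    pts = np.stack([
        pixel_to_camera_point(u, v, depth_map[v, u], fx, fy, cx, cy)
        for u, v in zip(us, vs)
    ])
    return pts.mean(axis=0)

# Illustrative use with assumed intrinsics and a synthetic flat depth map.
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth_map = np.ones((480, 640))          # every pixel 1 m from the camera
mask = np.zeros((480, 640), dtype=bool)
mask[239:242, 319:322] = True            # small region around the image centre
centroid = object_centroid(depth_map, mask, fx, fy, cx, cy)
```

A pixel at the principal point back-projects to a point directly on the optical axis, so the centroid of a mask centred near `(cx, cy)` lies close to `(0, 0, depth)`. In practice, invalid (zero) depth readings would need to be masked out before averaging.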