Zhang Yang, Xie Lihua, Li Yuheng, Li Yuan
China Tobacco Sichuan Industrial Co., Ltd, Chengdu, Sichuan, China.
Qinhuangdao Tobacco Machinery Co., Ltd, Qinhuangdao, Hebei, China.
Front Comput Neurosci. 2023 Feb 20;17:1110889. doi: 10.3389/fncom.2023.1110889. eCollection 2023.
Object detection and grasp detection are essential for unmanned systems working in cluttered real-world environments. Detecting grasp configurations for each object in the scene would enable reasoning about manipulation. However, finding the relationships between objects and grasp configurations remains a challenging problem. To achieve this, we propose a novel neural learning approach, SOGD, which predicts the best grasp configuration for each detected object from an RGB-D image. The cluttered background is first filtered out via a 3D-plane-based approach. Two separate branches then detect objects and grasp candidates, respectively, and an additional alignment module learns the relationship between object proposals and grasp candidates. A series of experiments on two public datasets (the Cornell Grasp Dataset and the Jacquard Dataset) demonstrates the superior performance of SOGD over state-of-the-art methods in predicting reasonable grasp configurations from a cluttered scene.
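The abstract describes the background-filtering step only as "3D-plane-based." A common way to realize such a step is RANSAC plane segmentation on the depth-derived point cloud: fit the dominant (table) plane and keep only off-plane points as object foreground. The sketch below is a minimal NumPy illustration of that idea; the function names, iteration count, and distance threshold are assumptions for illustration, not details from the paper.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, rng=None):
    """Fit the dominant plane with RANSAC.

    points: (N, 3) array of 3D points.
    Returns (normal, d, inlier_mask) for the plane n . x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best = None
    best_count = -1
    for _ in range(n_iters):
        # Sample 3 points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Score by number of points within dist_thresh of the plane.
        mask = np.abs(points @ normal + d) < dist_thresh
        count = int(mask.sum())
        if count > best_count:
            best_count = count
            best = (normal, d, mask)
    return best

def remove_background(points, dist_thresh=0.01):
    """Drop points lying on the dominant (table) plane; keep object points."""
    _, _, plane_mask = ransac_plane(points, dist_thresh=dist_thresh, rng=0)
    return points[~plane_mask]
```

In practice the point cloud would be back-projected from the RGB-D depth image using the camera intrinsics before this step; the surviving foreground points (and their image coordinates) then feed the object-detection and grasp-detection branches.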