
Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset.

Author Information

Li Bin, Cao Hu, Qu Zhongnan, Hu Yingbai, Wang Zhenke, Liang Zichen

Affiliations

JingDong Group, Beijing, China.

Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München, Germany.

Publication Information

Front Neurorobot. 2020 Oct 8;14:51. doi: 10.3389/fnbot.2020.00051. eCollection 2020.

Abstract

Robotic grasping plays an important role in the field of robotics. Current state-of-the-art robotic grasping detection systems are usually built on conventional vision sensors such as RGB-D cameras. Compared to traditional frame-based computer vision, neuromorphic vision is a small and young research community, and event-based datasets remain scarce because annotating an asynchronous event stream is troublesome: annotating large-scale vision datasets already takes substantial resources, and video-level annotation is especially laborious. In this work, we consider the problem of detecting robotic grasps in a moving camera view of a scene containing objects. To obtain more agile robotic perception, a neuromorphic vision sensor (DAVIS) attached to the robot gripper is introduced to explore its potential use in grasping detection. We construct a robotic grasping dataset, named the Event-Grasping dataset, with 91 objects. A spatial-temporal mixed particle filter (SMP Filter) is proposed to track the LED-based grasp rectangles, which enables video-level annotation of a single grasp rectangle per object. As the LEDs blink at high frequency, the dataset is annotated at a high frequency of 1 kHz. Based on this dataset, we develop a deep neural network for grasping detection that treats the angle learning problem as classification instead of regression. The method achieves high detection accuracy on our dataset, with 93% precision under an object-wise split. This work provides a large-scale, well-annotated dataset and promotes neuromorphic vision applications in agile robotics.
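The abstract's key design choice, casting grasp-angle learning as classification over discretized orientation bins rather than regression of a continuous angle, can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the paper's actual network: the bin count (18), feature dimension, and layer shapes are assumptions, and only the angle head is shown, without any event-stream backbone.

import math
import torch
import torch.nn as nn

NUM_ANGLE_BINS = 18  # assumption: 180 degrees split into 10-degree bins

def angle_to_bin(theta_rad: torch.Tensor) -> torch.Tensor:
    """Map a grasp angle to a discrete class index in [0, NUM_ANGLE_BINS)."""
    theta = torch.remainder(theta_rad, math.pi)  # grasp angles are symmetric mod pi
    return (theta / math.pi * NUM_ANGLE_BINS).long().clamp(max=NUM_ANGLE_BINS - 1)

class GraspAngleHead(nn.Module):
    """Classification head over angle bins, attached to some backbone feature."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(feat_dim, NUM_ANGLE_BINS)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats)  # raw logits, one per angle bin

# Toy usage: random features and angles stand in for real event-camera data.
feats = torch.randn(4, 256)
angles = torch.rand(4) * math.pi
logits = GraspAngleHead()(feats)
loss = nn.CrossEntropyLoss()(logits, angle_to_bin(angles))
print(loss.item())

One motivation for this formulation is that a regressed angle suffers from the wrap-around discontinuity at 0/180 degrees, whereas classification over bins sidesteps it and yields a well-behaved cross-entropy objective.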

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c3c/7580650/fc08fa85429b/fnbot-14-00051-g0001.jpg
