DeGol Joseph, Akhtar Aadeel, Manja Bhargava, Bretl Timothy
University of Illinois, Urbana, IL 61801, USA.
Annu Int Conf IEEE Eng Med Biol Soc. 2016 Aug;2016:431-434. doi: 10.1109/EMBC.2016.7590732.
In this paper, we demonstrate how automatic grasp selection can be achieved by placing a camera in the palm of a prosthetic hand and training a convolutional neural network on images of objects with corresponding grasp labels. Our labeled dataset is built from common graspable objects curated from the ImageNet dataset and from images captured from our own camera that is placed in the hand. We achieve a grasp classification accuracy of 93.2% and show through real-time grasp selection that using a camera to augment current electromyography controlled prosthetic hands may be useful.
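The abstract describes classifying a palm-camera image into a grasp type with a convolutional network. As a minimal, framework-free sketch of that pipeline (conv, ReLU, pooling, linear classifier), the toy forward pass below uses random weights and a hypothetical five-grasp label set; the actual network architecture, training procedure, and label taxonomy in the paper are not reproduced here.

```python
import numpy as np

# Hypothetical grasp label set (an assumption; the paper's exact taxonomy differs).
GRASPS = ["power", "pinch", "tripod", "tool", "key"]

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_grasp(img, kernels, W, b):
    """Toy CNN forward pass: conv -> ReLU -> global average pool -> linear -> argmax."""
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    logits = W @ feats + b
    return GRASPS[int(np.argmax(logits))]

rng = np.random.default_rng(0)
img = rng.random((16, 16))               # stand-in for a grayscale palm-camera frame
kernels = rng.standard_normal((8, 3, 3)) # 8 untrained 3x3 filters
W = rng.standard_normal((len(GRASPS), 8))
b = np.zeros(len(GRASPS))
print(predict_grasp(img, kernels, W, b))
```

With trained weights, the argmax over the logits would select the grasp preshape for the prosthetic hand; here the weights are random, so the output is an arbitrary label from the set.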