Gregori Valentina, Cognolato Matteo, Saetta Gianluca, Atzori Manfredo, Gijsberts Arjan
Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy.
VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy.
Front Bioeng Biotechnol. 2019 Nov 15;7:316. doi: 10.3389/fbioe.2019.00316. eCollection 2019.
Visual attention is often predictive of future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.
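The abstract describes computing per-frame pixel distances between the gaze point and the segmented target object (or limb). As an illustration only (not the authors' code), the sketch below shows one plausible way to compute such a distance, assuming the segmentation network yields a binary mask per object and the eye tracker yields gaze coordinates in image pixels; the function name and example values are hypothetical.

```python
import numpy as np

def gaze_to_mask_distance(gaze_xy, mask):
    """Minimum pixel distance from a gaze point to a binary
    segmentation mask (e.g., the target object or the limb).

    gaze_xy : (x, y) gaze coordinates in image pixels.
    mask    : 2D boolean array, True where the object was segmented.
    """
    ys, xs = np.nonzero(mask)          # pixel coordinates of the mask
    if xs.size == 0:                   # object not detected in this frame
        return np.nan
    dx = xs - gaze_xy[0]
    dy = ys - gaze_xy[1]
    return float(np.sqrt(dx * dx + dy * dy).min())

# Hypothetical example: a 10x10 frame with a 3x3 object mask;
# the gaze lands two pixels to the left of the object.
frame_mask = np.zeros((10, 10), dtype=bool)
frame_mask[4:7, 4:7] = True
print(gaze_to_mask_distance((2, 5), frame_mask))  # -> 2.0
```

Computed frame by frame over the first-person video, a distance series like this is what would let one measure how long before the grasp the gaze settles on the target (the >500 ms lead reported above).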