

Learning latent actions to control assistive robots.

Author Information

Losey Dylan P, Jeon Hong Jun, Li Mengxi, Srinivasan Krishnan, Mandlekar Ajay, Garg Animesh, Bohg Jeannette, Sadigh Dorsa

Affiliations

Mechanical Engineering Department, Virginia Tech, Blacksburg, USA.

Computer Science Department, Stanford University, Stanford, USA.

Publication Information

Auton Robots. 2022;46(1):115-147. doi: 10.1007/s10514-021-10005-w. Epub 2021 Aug 4.

Abstract

Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today's robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot's motion in the x-y plane, in another mode the joystick controls the robot's z-yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot's high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
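One common instantiation of the first step (learning latent actions from offline demonstrations) is a conditional autoencoder, where a decoder maps a low-dimensional latent input and the current robot state back to a high-dimensional action. Below is a minimal PyTorch sketch of that idea; the dimensions, network sizes, and placeholder training data are illustrative assumptions, not the paper's exact models.

```python
# Minimal sketch of the latent-action idea from the abstract: a
# conditional autoencoder embedding high-dimensional robot actions
# into a 2-DoF latent space that a joystick can drive. All dimensions,
# architectures, and data below are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 7    # e.g., joint positions of a 7-DoF arm (assumption)
ACTION_DIM = 7   # high-dimensional robot action (assumption)
LATENT_DIM = 2   # matches the 2-DoF joystick

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: (state, action) -> low-dimensional latent action z
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM),
        )
        # Decoder: (state, z) -> reconstructed high-dimensional action,
        # so the same z can produce different motions in different states
        self.decoder = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([state, z], dim=-1))

model = LatentActionModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Offline task demonstrations; random placeholders stand in for
# recorded (state, action) pairs.
demo_states = torch.randn(256, STATE_DIM)
demo_actions = torch.randn(256, ACTION_DIM)

# Train by reconstructing the demonstrated actions through the
# low-dimensional latent bottleneck.
for _ in range(1000):
    recon = model(demo_states, demo_actions)
    loss = nn.functional.mse_loss(recon, demo_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At run time, the joystick input is treated as the latent action z
# and decoded into a full arm action given the current robot state.
with torch.no_grad():
    joystick = torch.tensor([[0.3, -0.5]])   # 2-DoF user input
    state = torch.randn(1, STATE_DIM)        # current robot state
    action = model.decoder(torch.cat([state, joystick], dim=-1))
```

Conditioning the decoder on the robot's state is what lets the same joystick axis mean different things in different contexts, e.g., stabbing near the tofu but reaching elsewhere, which is the intuition the abstract describes.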


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/436f/8335729/4fa9b9fe1e32/10514_2021_10005_Fig1_HTML.jpg
