Wang Peiyi, Xie Zhexin, Xin Wenci, Tang Zhiqiang, Yang Xinhua, Mohanakrishnan Muralidharan, Guo Sheng, Laschi Cecilia
School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China.
Department of Mechanical Engineering, National University of Singapore, Singapore, Singapore.
Nat Commun. 2024 Nov 18;15(1):9978. doi: 10.1038/s41467-024-54327-6.
A high-level perceptual model found in the human brain is essential to guide robotic control when facing perception-intensive interactive tasks. Soft robots with inherent softness may benefit from such mechanisms when interacting with their surroundings. Here, we propose an expected-actual perception-action loop and demonstrate the model on a sensorized soft continuum robot. By sensing and matching expected and actual shape (1.4% estimation error on average) at each perception loop, our robot system rapidly (detection within 0.4 s) and robustly detects contact and distinguishes deformation sources, whether external and internal actions are applied separately or simultaneously. We also show that our soft arm can accurately perceive contact direction in both static and dynamic configurations (error below 10°), even in interactive environments without vision. The potential of our method is demonstrated in two experimental scenarios: learning to autonomously navigate by touching the walls, and teaching and repeating desired configurations of position and force through interaction with human operators.