Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany.
Health Robotics and Automation Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany.
Surg Endosc. 2021 Sep;35(9):5365-5374. doi: 10.1007/s00464-021-08509-8. Epub 2021 Apr 27.
We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons.
The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted by a single surgeon with human camera guidance. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally, data from training with the LWR were used to re-train the LWR.
The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.
The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.