Kogkas Alexandros A, Darzi Ara, Mylonas George P
HARMS Lab, Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 20 South Wharf Road, 3rd Floor Paterson Centre, London, W2 1PF, UK.
Int J Comput Assist Radiol Surg. 2017 Jul;12(7):1131-1140. doi: 10.1007/s11548-017-1580-y. Epub 2017 Apr 10.
Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information from multiple sources, especially perceptually enabled information, could help meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment.
The synergy of wearable eye-tracking and advanced computer vision methodologies, such as simultaneous localisation and mapping (SLAM), is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm and a laser pointer are integrated, and the set-up is used to project the surgeon's fixation point in 3D space.
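The projection step above can be sketched in outline: a 3D fixation point, expressed in the SLAM world frame, is mapped into the robot's base frame via a calibrated rigid transform, and the laser's pointing direction follows from the vector between the laser aperture and that point. This is a minimal illustrative sketch, not the authors' implementation; the transform `T_robot_world`, the laser origin, and all coordinate values are assumed placeholders.

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 so a 3D point can be multiplied by a 4x4 transform."""
    return np.append(p, 1.0)

# Assumed hand-eye/robot calibration: pose of the SLAM world frame
# expressed in the robot base frame (illustrative values only).
T_robot_world = np.array([
    [1.0, 0.0, 0.0,  0.50],
    [0.0, 1.0, 0.0, -0.20],
    [0.0, 0.0, 1.0,  0.10],
    [0.0, 0.0, 0.0,  1.00],
])

fixation_world = np.array([1.2, 0.3, 0.8])      # gaze fixation point, world frame (m)
laser_origin_robot = np.array([0.0, 0.0, 0.4])  # laser aperture, robot base frame (m)

# Map the fixation point into the robot frame, then derive a unit
# pointing vector the laser should be aimed along.
fixation_robot = (T_robot_world @ to_homogeneous(fixation_world))[:3]
direction = fixation_robot - laser_origin_robot
direction /= np.linalg.norm(direction)
```

In a real system the transform would come from extrinsic calibration between the SLAM map and the robot base, and the direction would be converted into joint commands by the arm's inverse kinematics.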
The implementation is evaluated over 60 fixations on predefined targets, with subject-to-target distances of 92-212 cm and robot-to-target distances of 42-193 cm. The median overall system error is currently 3.98 cm. The system's real-time potential is also highlighted.
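The reported 3.98 cm figure is a median over per-fixation errors. A minimal sketch of how such a summary statistic is computed from target and projected laser positions (all coordinate values below are made-up placeholders, not the paper's data):

```python
import numpy as np

# Hypothetical ground-truth target positions and measured laser projections (cm).
targets = np.array([
    [100.0,   0.0,  0.0],
    [150.0,  20.0,  5.0],
    [200.0, -10.0, 10.0],
])
projected = np.array([
    [103.0,   1.0,  0.0],
    [154.0,  22.0,  5.0],
    [196.0, -10.0, 11.0],
])

# Euclidean error per fixation, then the median across all fixations.
errors = np.linalg.norm(projected - targets, axis=1)
median_error = np.median(errors)
```

The median is a sensible choice here because it is robust to the occasional outlier fixation, e.g. one corrupted by eye-tracker drift.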
The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.