
Gaze-contingent perceptually enabled interactions in the operating theatre.

Author information

Kogkas Alexandros A, Darzi Ara, Mylonas George P

Affiliation

HARMS Lab, Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 20 South Wharf Road, 3rd Floor Paterson Centre, London, W2 1PF, UK.

Publication information

Int J Comput Assist Radiol Surg. 2017 Jul;12(7):1131-1140. doi: 10.1007/s11548-017-1580-y. Epub 2017 Apr 10.

Abstract

PURPOSE

Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information, especially perceptually enabled information, from multiple sources could help meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment.

METHODS

The synergy of wearable eye-tracking and advanced computer vision methodologies, such as simultaneous localisation and mapping (SLAM), is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm with a laser pointer is integrated, and the set-up is used to project the surgeon's fixation point in 3D space.
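As a rough illustration of the geometry such a set-up involves (a sketch, not the authors' implementation: the intrinsic matrix `K`, the SLAM camera pose `T_world_cam`, and the nearest-point intersection with a reconstructed point cloud are all assumptions), the 2D gaze point on the scene-camera image can be back-projected into a 3D ray in the SLAM world frame and intersected with the reconstructed scene to estimate the 3D fixation point:

```python
import numpy as np

def gaze_ray_world(gaze_px, K, T_world_cam):
    """Back-project a 2D gaze point (pixels) on the scene-camera image
    into a 3D ray expressed in the SLAM world frame.
    gaze_px: (u, v); K: 3x3 intrinsics; T_world_cam: 4x4 camera pose."""
    u, v = gaze_px
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    R, t = T_world_cam[:3, :3], T_world_cam[:3, 3]
    origin = t                                        # camera centre in world frame
    direction = R @ d_cam                             # ray direction in world frame
    return origin, direction / np.linalg.norm(direction)

def fixation_point(origin, direction, cloud):
    """Estimate the 3D fixation as the reconstructed map point that lies
    closest to the gaze ray (points behind the camera are ignored)."""
    rel = cloud - origin
    proj = rel @ direction                 # signed distance along the ray
    perp = rel - np.outer(proj, direction) # perpendicular offset from the ray
    dist = np.linalg.norm(perp, axis=1)
    dist[proj < 0] = np.inf
    return cloud[np.argmin(dist)]
```

The resulting 3D point could then serve as the target at which the robot-held laser pointer is aimed; in practice a dense SLAM surface, rather than a sparse point cloud, would make the intersection more robust.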

RESULTS

The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92-212 cm and between the robot and the targets of 42-193 cm. The median overall system error is currently 3.98 cm. The system's real-time potential is also highlighted.
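The abstract does not state how the overall error is defined; assuming it is the Euclidean distance between each projected laser spot and its intended target, the reported median could be computed along these lines (a hedged sketch, with hypothetical array inputs):

```python
import numpy as np

def median_system_error(laser_hits, targets):
    """Median Euclidean distance (same units as the inputs, e.g. cm)
    between measured laser-spot positions and the corresponding
    predefined targets; both arrays are Nx3."""
    errors = np.linalg.norm(np.asarray(laser_hits) - np.asarray(targets), axis=1)
    return float(np.median(errors))
```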

CONCLUSIONS

The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fd08/5509830/5e53120465c5/11548_2017_1580_Fig1_HTML.jpg
