Schwarz Loren Arthur, Bigdelou Ali, Navab Nassir
Computer Aided Medical Procedures, Technische Universität München, Germany.
Med Image Comput Comput Assist Interv. 2011;14(Pt 1):129-36. doi: 10.1007/978-3-642-23623-5_17.
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and the interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable it to recognize gestures and to track particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless, body-worn inertial sensors, we avoid the issues of camera-based systems, such as sensitivity to illumination and occlusions. Owing to a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach robustly recognizes learned gestures and distinguishes them from other movements.
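To make the recognition idea concrete, below is a minimal sketch of per-gesture low-dimensional models trained from examples and used to both classify gestures and reject unrelated movements. It is not the paper's implementation: PCA stands in for the manifold learning step, synthetic sinusoids stand in for the inertial sensor windows, and the constants (WINDOW, N_DIMS, the rejection threshold) are hypothetical.

```python
# Sketch: per-gesture manifold models for recognition with rejection.
# Assumptions (not from the paper): PCA approximates the manifold
# learning step; synthetic data replaces real inertial recordings.
import numpy as np
from sklearn.decomposition import PCA

WINDOW = 50       # samples per gesture window (hypothetical)
N_DIMS = 9        # e.g. 3 inertial sensors x 3 axes (hypothetical)
N_COMPONENTS = 3  # dimensionality of the learned low-dimensional model

def make_examples(freq, n=20, noise=0.05):
    """Synthetic stand-in for recorded sensor windows of one gesture."""
    t = np.linspace(0, 1, WINDOW)
    base = np.sin(2 * np.pi * freq * t)
    return np.stack([
        np.tile(base, N_DIMS) + noise * np.random.randn(WINDOW * N_DIMS)
        for _ in range(n)
    ])  # shape (n, WINDOW * N_DIMS): flattened windows

# Learn one low-dimensional model per gesture from its training examples.
models = {}
for name, freq in [("zoom", 1.0), ("scroll", 2.0)]:
    models[name] = PCA(n_components=N_COMPONENTS).fit(make_examples(freq))

def recognize(window, threshold=0.5):
    """Assign the gesture whose model reconstructs the window best;
    reject as 'other movement' if no model fits well enough."""
    best_name, best_err = None, np.inf
    for name, pca in models.items():
        recon = pca.inverse_transform(pca.transform(window[None, :]))[0]
        err = np.linalg.norm(window - recon) / np.sqrt(window.size)
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err < threshold else None

test = make_examples(1.0, n=1)[0]
print(recognize(test))                        # typically 'zoom'
print(recognize(np.random.randn(test.size)))  # typically None (rejected)
```

The reconstruction-error test gives the rejection behavior the abstract describes (distinguishing learned gestures from other movements): a movement far from every learned model reconstructs poorly under all of them and is left unclassified.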