Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, People's Republic of China.
Adv Exp Med Biol. 2018;1093:207-224. doi: 10.1007/978-981-13-1396-7_17.
The human-machine interface (HMI) is an essential part of image-guided orthopedic navigation systems. HMI provides the primary platform for merging surgically relevant pre- and intraoperative images from different modalities with 3D models, including anatomical structures and implants, to support surgical planning and navigation. With the various input-output techniques of HMI, surgeons can intuitively manipulate anatomical models generated from medical images and/or implant models for surgical planning. Furthermore, HMI recreates sight, sound, and touch feedback for the guidance of surgical operations, which helps surgeons perceive more relevant information, e.g., anatomical structures and surrounding tissue, the mechanical axis of limbs, and even the mechanical properties of tissue. Thus, with the help of interactive HMI, precision operations, such as cutting, drilling, and implantation, can be performed more easily and safely.
Classic HMI is based on 2D displays and the standard input devices of computers. In contrast, modern virtual reality (VR) and augmented reality (AR) techniques allow more information to be displayed for surgical navigation, and various such techniques have been applied to image-guided orthopedic therapy. To realize rapid image-based modeling and to create effective interaction and feedback, intelligent algorithms have been developed: algorithms that achieve fast image-to-image and image-to-patient registration, and algorithms that compensate for the visual offset in AR displays have been investigated. To accomplish more effective human-computer interaction, various input methods and force-sensing/force-reflecting methods have also been developed. This chapter reviews human-machine interface techniques for image-guided orthopedic navigation, analyzes several examples of clinical applications, and discusses the trend toward intelligent HMI in orthopedic navigation.
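The image-to-patient registration mentioned above is often performed with paired fiducial points (e.g., landmarks digitized on the patient matched to the same landmarks in the preoperative image). As an illustration only, not the chapter's specific method, the sketch below solves this paired-point rigid registration with the standard SVD-based (Kabsch) least-squares solution; the function name and array shapes are assumptions for the example.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of paired fiducial coordinates, e.g. points
    in image space (src) and the same landmarks digitized on the
    patient with a tracked probe (dst). Illustrative sketch only.
    """
    src_c = src.mean(axis=0)                  # centroids
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    t = dst_c - R @ src_c                     # optimal translation
    return R, t
```

After solving, every image-space point `p` maps to patient space as `R @ p + t`; the residual distances at the fiducials give the fiducial registration error commonly reported for navigation accuracy.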