Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine.
Department of Biomedical Engineering, Washington University School of Engineering and Applied Science, St. Louis, Missouri.
Otol Neurotol. 2018 Dec;39(10):e1137-e1142. doi: 10.1097/MAO.0000000000001995.
A mixed reality (MR) headset that enables three-dimensional (3D) visualization of interactive holograms anchored to specific points in physical space was developed for use with lateral skull base anatomy. The objectives of this study are to: 1) develop an augmented reality platform using the headset for visualization of temporal bone structures, and 2) measure the accuracy of the platform as an image guidance system.
A combination of semiautomatic and manual segmentation was used to generate 3D reconstructions of soft tissue and bony anatomy of cadaver heads and temporal bones from two-dimensional (2D) computed tomography images. A mixed reality platform was developed using C# programming to generate interactive 3D holograms that could be displayed in the HoloLens headset. Accuracy of visual surface registration was determined by the target registration error between seven predefined points on a 3D holographic skull and a 3D-printed model.
Interactive 3D holograms of soft tissue, bony anatomy, and internal ear structures of cadaveric models were generated and visualized in the MR headset. A software user interface was developed to allow user control of the virtual images through gaze, voice, and gesture commands. Visual surface point-matching registration was used to align and anchor the holograms to physical objects. The average target registration error of the system was 5.76 ± 0.54 mm.
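As a minimal sketch of how a target registration error of this kind can be computed, the C# snippet below takes paired landmark coordinates, assuming the hologram points have already been transformed into the physical model's coordinate frame by the registration step; the type and method names (Vec3, RegistrationError.Tre) are illustrative placeholders, not part of the published platform, where UnityEngine.Vector3 would typically play this role.

```csharp
using System;
using System.Linq;

// Minimal 3D point type for this sketch; in a HoloLens/Unity app,
// UnityEngine.Vector3 would normally be used instead.
public readonly struct Vec3
{
    public readonly double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

    public double DistanceTo(Vec3 other) =>
        Math.Sqrt((X - other.X) * (X - other.X) +
                  (Y - other.Y) * (Y - other.Y) +
                  (Z - other.Z) * (Z - other.Z));
}

public static class RegistrationError
{
    // Target registration error: mean and standard deviation of the Euclidean
    // distances between corresponding landmarks on the registered hologram
    // and on the 3D-printed model (e.g., the seven predefined skull points).
    public static (double Mean, double StdDev) Tre(Vec3[] hologramPoints, Vec3[] modelPoints)
    {
        if (hologramPoints.Length != modelPoints.Length)
            throw new ArgumentException("Landmark sets must be paired point-for-point.");

        double[] distances = hologramPoints
            .Zip(modelPoints, (h, m) => h.DistanceTo(m))
            .ToArray();

        double mean = distances.Average();
        double variance = distances.Sum(d => (d - mean) * (d - mean)) / distances.Length;
        return (mean, Math.Sqrt(variance));
    }
}
```

In this formulation, reporting the mean and standard deviation over the landmark pairs mirrors how the 5.76 ± 0.54 mm figure is presented in the results.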
In this article, we demonstrate that an MR headset can be used to display interactive 3D anatomic structures of the temporal bone overlaid on physical models. This technology has the potential to serve as an image guidance tool during anatomic dissection and lateral skull base surgery.