Barber Samuel R, Jain Saurabh, Son Young-Jun, Almefty Kaith, Lawton Michael T, Stevens Shawn M
Department of Otolaryngology-Head and Neck Surgery, University of Arizona College of Medicine, Tucson, Arizona, United States.
Department of Systems and Industrial Engineering, University of Arizona, Tucson, Arizona, United States.
J Neurol Surg B Skull Base. 2021 Jul;82(Suppl 3):e268-e270. doi: 10.1055/s-0040-1701675. Epub 2020 Mar 13.
Current virtual reality (VR) technology allows the creation of instructional video formats that incorporate three-dimensional (3D) stereoscopic footage. Combined with 3D anatomic models, any surgical procedure or pathology could be represented virtually to supplement learning or preoperative surgical planning. We propose a standalone VR app that allows trainees to interact with modular 3D anatomic models corresponding to stereoscopic surgical videos. Stereoscopic video was recorded using an OPMI Pentero 900 microscope (Zeiss, Oberkochen, Germany). Axial temporal bone computed tomography Digital Imaging and Communications in Medicine (DICOM) images were segmented, and each anatomic structure was exported separately. 3D models included the semicircular canals, facial nerve, sigmoid sinus and jugular bulb, carotid artery, tegmen, canals within the temporal bone, cochlear and vestibular aqueducts, endolymphatic sac, and all branches of cranial nerves VII and VIII. Finished files were imported into the Unreal Engine, and the resultant application was viewed using an Oculus Go. A VR environment facilitated viewing of stereoscopic video and interactive model manipulation using the VR controller. Interactive models allowed users to toggle transparency, enable highlighted segmentation, and activate labels for each anatomic structure. Based on 20 variable components, 1.1 × 10⁶ combinations of structures per DICOM series were possible for representing patient-specific anatomy in 3D. This investigation provides proof of concept that a hybrid of stereoscopic video and VR simulation is possible, and that this tool may significantly aid lateral skull base trainees as they learn to navigate a complex 3D surgical environment. Future studies will validate the methodology.
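The reported combination count follows from simple combinatorics: with 20 independently toggleable model components, each shown/hidden configuration is one of 2^20 ≈ 1.1 × 10⁶ possibilities per DICOM series. A minimal sketch of this accounting (the component names and the per-structure toggle class are illustrative stand-ins, not the app's actual Unreal Engine code):

```python
# Illustrative model of the per-structure interaction options named in
# the abstract; the class and names are hypothetical, not the authors' code.
class AnatomicComponent:
    def __init__(self, name):
        self.name = name
        self.visible = True        # shown/hidden toggle counted below
        self.transparent = False   # "toggle transparency"
        self.highlighted = False   # "enable highlighted segmentation"
        self.labeled = False       # "activate labels"

# A few of the segmented structures from the abstract (illustrative subset).
components = [
    AnatomicComponent(n) for n in (
        "semicircular_canals", "facial_nerve", "sigmoid_sinus",
        "jugular_bulb", "carotid_artery", "tegmen",
    )
]

# With 20 independently shown/hidden components, the number of distinct
# structure combinations per DICOM series is 2**20.
n_variable_components = 20
n_combinations = 2 ** n_variable_components
print(n_combinations)  # 1048576, i.e. roughly 1.1 x 10^6
```

This is why the abstract cites roughly 1.1 × 10⁶ patient-specific views: the count grows exponentially in the number of toggleable structures, so even a modest component library yields a large space of teachable configurations.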