Gonzalez-Romo Nicolas I, Hanalioglu Sahin, Mignucci-Jiménez Giancarlo, Abramov Irakliy, Xu Yuan, Preul Mark C
Department of Neurosurgery, The Loyal and Edith Davis Neurosurgical Research Laboratory, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ, USA.
Oper Neurosurg (Hagerstown). 2023 Apr 1;24(4):432-444. doi: 10.1227/ons.0000000000000544. Epub 2022 Dec 23.
Immersive anatomic environments offer an alternative when anatomic laboratory access is limited, but current three-dimensional (3D) renderings are not able to simulate the anatomic detail and surgical perspectives needed for microsurgical education.
To perform a proof-of-concept study of a novel photogrammetry 3D reconstruction technique, converting high-definition (monoscopic) microsurgical images into a navigable, interactive, immersive anatomy simulation.
Images were acquired from cadaveric dissections and from an open-access comprehensive online microsurgical anatomic image database. A pretrained neural network capable of depth estimation from a single image was used to create depth maps (pixelated images containing distance information that could be used for spatial reprojection and 3D rendering). The virtual reality (VR) experience was assessed using a VR headset, and augmented reality was assessed using a quick response (QR) code-based application and a tablet camera.
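The abstract does not name the pretrained depth estimation network or the rendering pipeline. The following is a minimal sketch, assuming a publicly available MiDaS-style monocular depth model loaded through torch.hub and a simple pinhole back-projection, of how a single dissection photograph could be turned into a depth map and a colored 3D point cloud; the image path, focal length, and model choice are illustrative assumptions, not details from the study.

```python
import cv2
import numpy as np
import torch

# Assumption: a MiDaS-style pretrained monocular depth network; the abstract
# does not name the specific model used by the authors.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# Hypothetical high-definition dissection photograph.
img = cv2.cvtColor(cv2.imread("dissection.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))                    # relative inverse depth
    disparity = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()

# Convert relative inverse depth to a depth-like map (absolute scale unknown).
disparity = (disparity - disparity.min()) / (disparity.max() - disparity.min() + 1e-6) + 1e-3
depth = 1.0 / disparity

# Back-project every pixel with a pinhole camera model; f is an assumed focal
# length in pixels, not a calibration value reported by the authors.
h, w = depth.shape
f = 0.5 * w
u, v = np.meshgrid(np.arange(w), np.arange(h))
x = (u - w / 2) * depth / f
y = (v - h / 2) * depth / f
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # N x 3 point cloud
colors = img.reshape(-1, 3) / 255.0                       # per-point RGB

# 'points' and 'colors' can then be meshed or exported for viewing in a VR
# headset or a QR code-launched augmented reality viewer.
```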
Significant correlation was found between processed image depth estimations and neuronavigation-defined coordinates at different levels of magnification. Immersive anatomic models were created from dissection images captured in the authors' laboratory and from images retrieved from the Rhoton Collection. Interactive visualization and magnification allowed multiple perspectives for an enhanced experience in VR. The QR code offered a convenient method for importing anatomic models into the real world for rehearsal and for side-by-side comparison with other anatomic preparations.
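The abstract reports only that the correlation was significant, without specifying the statistic. A minimal sketch of how such paired measurements might be compared is shown below, assuming Pearson correlation and a linear fit between depth values sampled from the estimated depth map at anatomic landmarks and the corresponding depths recorded with a neuronavigation probe; all numbers are illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Illustrative placeholder data: relative depth estimates at landmark pixels
# and the matching neuronavigation-defined depths (mm). Not study data.
estimated = np.array([0.21, 0.34, 0.48, 0.55, 0.67, 0.72, 0.81, 0.90])
navigated_mm = np.array([12.5, 18.2, 24.9, 28.1, 33.6, 36.0, 40.7, 44.9])

# Correlation quantifies how well relative estimates track measured depth;
# a linear fit recovers the unknown scale of the monocular estimate.
r, p = stats.pearsonr(estimated, navigated_mm)
fit = stats.linregress(estimated, navigated_mm)

print(f"Pearson r = {r:.3f}, p = {p:.4f}")
print(f"depth (mm) ≈ {fit.slope:.1f} * estimate + {fit.intercept:.1f}")
```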
This proof-of-concept study validated the use of machine learning to render 3D reconstructions from two-dimensional (2D) microsurgical images through depth estimation. This spatial information can be used to develop convenient, realistic, and immersive anatomy image models.