Chiou Shin-Yan, Liu Li-Sheng, Lee Chia-Wei, Kim Dong-Hyun, Al-Masni Mohammed A, Liu Hao-Li, Wei Kuo-Chen, Yan Jiun-Lin, Chen Pin-Yuan
Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan.
Department of Nuclear Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan.
Bioengineering (Basel). 2023 May 20;10(5):617. doi: 10.3390/bioengineering10050617.
Most current surgical navigation methods rely on optical navigators with images displayed on an external screen. However, minimizing distractions during surgery is critical, and the spatial information displayed in this arrangement is non-intuitive. Previous studies have proposed combining optical navigation systems with augmented reality (AR) to provide surgeons with intuitive imaging during surgery, through the use of planar and three-dimensional imagery. However, these studies have mainly focused on visual aids and have paid relatively little attention to real surgical guidance aids. Moreover, the use of augmented reality reduces system stability and accuracy, and optical navigation systems are costly. Therefore, this paper proposes an augmented reality surgical navigation system based on image positioning that achieves the desired system advantages of low cost, high stability, and high accuracy. The system also provides intuitive guidance for the surgical target point, entry point, and trajectory. Once the surgeon uses the navigation stick to indicate the position of the surgical entry point, the connection between the surgical target and the surgical entry point is immediately displayed on the AR device (tablet or HoloLens glasses), and a dynamic auxiliary line is shown to assist with incision angle and depth. Clinical trials were conducted for extra-ventricular drainage (EVD) surgery, and surgeons confirmed the system's overall benefit. A "virtual object automatic scanning" method is proposed to achieve a high accuracy of 1 ± 0.1 mm for the AR-based system. Furthermore, a deep learning-based U-Net segmentation network is incorporated to enable automatic identification of the hydrocephalus location by the system. The system achieves recognition accuracy, sensitivity, and specificity of 99.93%, 93.85%, and 95.73%, respectively, a significant improvement over previous studies.
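To illustrate the U-Net component mentioned above, the following is a minimal sketch of an encoder–decoder segmentation network with skip connections in PyTorch. It is purely illustrative: the paper's actual channel widths, depth, input modality, and training procedure are not stated in the abstract, so all architectural choices here (16/32/64 channels, two pooling stages, the `MiniUNet` name) are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative U-Net-style network (hypothetical configuration; the paper's
# exact architecture and hyperparameters are not given in the abstract).
class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        def block(i, o):
            # Two 3x3 convolutions with ReLU, as in the original U-Net design
            return nn.Sequential(
                nn.Conv2d(i, o, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(o, o, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_ch, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, out_ch, 1)  # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections: concatenate encoder features with upsampled maps
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# A single-channel 64x64 brain-image slice -> per-pixel mask logits,
# same spatial size as the input
model = MiniUNet()
mask_logits = model(torch.randn(1, 1, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```

The output has the same spatial resolution as the input, so thresholding the per-pixel logits yields a binary segmentation mask of the region of interest (here, the hydrocephalus location).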