Tai Yonghang, Qian Kai, Huang Xiaoqiao, Zhang Jun, Jan Mian Ahmad, Yu Zhengtao
Yunnan Key Laboratory of Opto-Electronic Information Technology, Yunnan Normal University, Kunming 650500, China.
Department of Thoracic Surgery, Yunnan First People's Hospital, Kunming 650000, China.
IEEE Trans Industr Inform. 2021 Jan 19;17(9):6519-6527. doi: 10.1109/TII.2021.3052788. eCollection 2021 Sep.
A novel intelligent navigation technique for accurate image-guided COVID-19 lung biopsy is presented, which systematically combines augmented reality (AR), customized haptic-enabled surgical tools, and deep neural networks to achieve customized surgical navigation. Clinical data from 341 COVID-19-positive patients, together with a 1598-subject negative control group, were collected for model synergy and evaluation. Biomechanical force data from the experiment were fed into a WPD-CNN-LSTM (WCL) network to learn a new patient-specific COVID-19 surgical model, and ResNet was employed for intraoperative force classification. To boost user immersion and improve the user experience, intraoperative guidance images were combined with the haptic-AR navigational view. Furthermore, a 3-D user interface (3DUI), including all requisite surgical details, was developed with a guaranteed real-time response. Twenty-four thoracic surgeons were invited to objective and subjective experiments for performance evaluation. The root-mean-square error of the proposed WCL model is 0.0128, and the classification accuracy is 97%, which demonstrates that the innovative AR with deep learning (DL) intelligent model outperforms existing perception navigation techniques with significantly higher performance. This article presents a novel framework for interventional surgical integration for COVID-19 and opens new research into the integration of AR, haptic rendering, and deep learning for surgical navigation.
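The WPD stage of the WCL model refers to wavelet packet decomposition, which splits a force signal into subbands before the CNN-LSTM layers. The sketch below is a minimal illustration only: the Haar wavelet, the 3-level depth, and the subband-energy features are assumptions for demonstration, since the abstract does not specify the paper's actual wavelet, decomposition depth, or feature set.

```python
import numpy as np

def haar_step(x):
    # One level of Haar analysis: approximation (a) and detail (d) coefficients.
    x = x[: len(x) // 2 * 2]  # drop a trailing sample if the length is odd
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wpd(x, levels):
    # Wavelet packet decomposition: unlike the plain DWT, BOTH the
    # approximation and the detail branch are split again at every level,
    # yielding 2**levels equal-width leaf subbands.
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes

# Toy "biomechanical force" signal: a slow component plus a faster ripple.
t = np.linspace(0.0, 1.0, 256)
force = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

leaves = wpd(force, levels=3)                              # 8 subbands of 32 samples
features = np.array([np.sum(leaf ** 2) for leaf in leaves])  # subband energies
```

Because the Haar transform is orthonormal, the subband energies sum to the total signal energy, so no information is lost before the learned layers; in a WCL-style pipeline these subband sequences (or features derived from them) would then feed the CNN-LSTM regressor.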