Haouchine Nazim, Dorent Reuben, Juvekar Parikshit, Torio Erickson, Wells William M, Kapur Tina, Golby Alexandra J, Frisken Sarah
Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
Massachusetts Institute of Technology, Cambridge, MA, USA.
Med Image Comput Comput Assist Interv. 2023 Oct;14228:227-237. doi: 10.1007/978-3-031-43996-4_22. Epub 2023 Oct 1.
We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach shifts the processing burden to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
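The abstract describes pose estimation as a dissimilarity minimization between the live microscope view and pre-synthesized Expected Appearances. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes the expected appearances have already been rendered preoperatively and paired with candidate camera poses, and the function names (`dissimilarity`, `estimate_pose`) and the mean-squared-error metric are illustrative assumptions.

```python
# Hypothetical sketch: pose estimation by searching over pre-synthesized
# Expected Appearances. The paper's actual similarity metric, pose
# parameterization, and optimization strategy may differ.
import numpy as np


def dissimilarity(view: np.ndarray, appearance: np.ndarray) -> float:
    """Mean squared intensity difference between two same-sized images."""
    diff = view.astype(np.float64) - appearance.astype(np.float64)
    return float(np.mean(diff ** 2))


def estimate_pose(intraop_view: np.ndarray,
                  expected_appearances: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Return the 4x4 camera pose whose synthesized Expected Appearance
    best matches the intraoperative microscope view (argmin of dissimilarity).

    expected_appearances: list of (rendered_image, 4x4_pose) pairs generated
    preoperatively for a predicted range of transformations.
    """
    best_pose, best_score = None, np.inf
    for appearance, pose in expected_appearances:
        score = dissimilarity(intraop_view, appearance)
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose
```

Because the costly synthesis step happens preoperatively, the intraoperative step reduces to a comparatively cheap matching problem, which is consistent with the abstract's claim of moving processing to the preoperative stage.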