Siemens Healthineers, Frimley, UK.
School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
Int J Comput Assist Radiol Surg. 2018 Aug;13(8):1141-1149. doi: 10.1007/s11548-018-1774-y. Epub 2018 May 12.
In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodal 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast level, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, introducing constraints or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application.
This paper proposes a model-to-image registration approach instead, since it is common in image-guided interventions to create anatomical models for diagnosis, planning, or guidance prior to the procedure. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images.
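The core idea of an imitation learning-based registration agent can be sketched as follows: a policy is trained to mimic an expert's corrective pose updates, then applied iteratively at inference time to drive the model into alignment. The sketch below is purely illustrative and makes assumptions not stated in the abstract: the observation, action space, and linear policy are toy stand-ins for the paper's actual features and network.

```python
import numpy as np

# Illustrative sketch of imitation-learning registration (assumed toy
# setup, not the paper's method): an agent observes a misalignment
# feature and predicts the expert's corrective pose update; applied
# iteratively, this drives the preoperative model toward the target pose.

rng = np.random.default_rng(0)

# --- "Expert" demonstrations: observation -> corrective step ------------
# Observation: current 2D translation error (what a network would infer
# from the projected model vs. the X-ray). Expert action: a step toward 0.
obs = rng.uniform(-20, 20, size=(500, 2))   # misalignments in mm (tx, ty)
actions = -0.5 * obs                        # expert moves halfway back

# --- Train the policy by least squares (stand-in for a CNN) -------------
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# --- Inference: iteratively apply the predicted updates -----------------
pose = np.array([15.0, -8.0])               # initial misalignment (mm)
for _ in range(20):
    pose = pose + pose @ W                  # agent's corrective step

print(np.linalg.norm(pose))                 # residual error shrinks toward 0
```

Each iteration halves the residual misalignment here, so twenty steps reduce a 15 mm offset to well under a micrometer; the same loop structure applies when the linear policy is replaced by a learned network acting on image features.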
Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases.
Besides demonstrating the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.