Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
Med Image Anal. 2021 Jul;71:102041. doi: 10.1016/j.media.2021.102041. Epub 2021 Mar 21.
Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions, such as Transcatheter Arterial Chemoembolization (TACE) of liver cancer guided by intraprocedural CBCT and pre-operative MR. The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can potentially improve intra-procedural tumor targeting and thereby significantly improve therapeutic outcomes. However, intra-procedural CBCT often suffers from suboptimal image quality due to the lack of signal calibration to Hounsfield units, a limited field of view (FOV), and motion/metal artifacts. These non-ideal conditions prevent standard intensity-based multimodal registration methods from producing correct transformations across modalities. Registration based on anatomic structures, such as segmentations or landmarks, offers an efficient alternative, but such anatomic information is not always available. One could train a deep learning-based anatomy extractor, but that requires large-scale manual annotations on the specific modalities, which are often extremely time-consuming to obtain and require expert radiological readers. To tackle these issues, we leverage annotated datasets already existing in a source modality and propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) for learning segmentation without target-modality ground truth. The resulting segmenters are then integrated into our anatomy-guided multimodal registration framework based on the robust point matching machine. Our experimental results on in-house TACE patient data demonstrate that APA2Seg-Net can generate robust CBCT and MR liver segmentations, and that the anatomy-guided registration framework built on these segmenters can provide high-quality multimodal registrations.
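The anatomy-guided registration step aligns point sets extracted from the two segmentations. As a simplified illustration only (this is plain rigid ICP, not the paper's robust point matching machine, which additionally uses soft correspondences and deterministic annealing to tolerate outliers and partial overlap), aligning two synthetic surface point clouds can be sketched as:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points.
    Returns rotation R (3x3) and translation t (3,).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50):
    """Iterative closest point: alternate nearest-neighbour matching
    with a rigid least-squares fit; returns the accumulated (R, t)."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        # compose with the transform accumulated so far
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total
```

Hard nearest-neighbour assignment is what makes plain ICP brittle under the limited-FOV and artifact conditions the abstract describes; robust point matching replaces it with a soft correspondence matrix that is annealed toward a hard assignment, which is why it is preferred for CBCT/MR segmentation-based registration.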