Institute of Medical Informatics, University of Lübeck, Germany.
Med Image Anal. 2021 Jan;67:101822. doi: 10.1016/j.media.2020.101822. Epub 2020 Oct 6.
Methods for deep learning based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and an often insufficient supply of expert-annotated correspondences has led to slower progress than in other domains such as image segmentation. Yet, registration can also benefit more directly from an iterative solution than segmentation. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning from deformation estimation. In this work, we examine an end-to-end trainable, weakly-supervised deep learning-based feature extraction approach that maps complex, modality-specific appearance into a common space. Our results on thoracoabdominal CT and MRI registration show that the proposed method compares favourably with state-of-the-art hand-crafted multi-modal features, Mutual Information-based approaches and fully-integrated CNN-based methods, and copes well even with small and only weakly-labelled training data sets.
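The disentanglement the abstract describes can be illustrated with a toy sketch: a modality-invariant feature extractor (here a hand-rolled gradient-magnitude stand-in for the learned network; all names and the translation-only deformation model are illustrative assumptions, not the paper's method) followed by a separate, iterative deformation search over the feature maps.

```python
import numpy as np

def feature_map(img):
    # Stand-in for the learned extractor: gradient magnitude, which is
    # roughly invariant to monotonic intensity remappings between modalities.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def estimate_translation(feat_fixed, feat_moving, radius=4):
    # Decoupled deformation step: exhaustive integer-displacement search
    # minimising the SSD between the (modality-invariant) feature maps.
    best_ssd, best_d = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(feat_moving, (dy, dx), axis=(0, 1))
            ssd = np.sum((feat_fixed - shifted) ** 2)
            if ssd < best_ssd:
                best_ssd, best_d = ssd, (dy, dx)
    return best_d

# Synthetic "CT": a bright square on a dark background.
fixed = np.zeros((32, 32))
fixed[10:20, 10:20] = 1.0
# Synthetic "MR": inverted contrast (different appearance), shifted by (2, -3).
moving = np.roll(1.0 - fixed, (2, -3), axis=(0, 1))

# Registration in feature space recovers the inverse shift (-2, 3).
shift = estimate_translation(feature_map(fixed), feature_map(moving))
print(shift)  # -> (-2, 3)
```

Intensity-based SSD on the raw images would fail here because the contrast is inverted; matching in the common feature space succeeds, which is the point of separating appearance learning from deformation estimation.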