IEEE J Biomed Health Inform. 2022 Mar;26(3):1177-1187. doi: 10.1109/JBHI.2021.3095409. Epub 2022 Mar 7.
Deformable medical image registration estimates a deformation field that aligns the regions of interest (ROIs) of two images to the same spatial coordinate system. However, recent unsupervised registration models have only correspondence ability without perception, causing misalignment on blurred anatomies and distortion of task-irrelevant backgrounds. Label-constrained (LC) registration models embed perception ability via labels, but the lack of texture constraints in labels and the expensive labeling cost cause distortion inside ROIs and overfitted perception. We propose the first few-shot deformable medical image registration framework, Perception-Correspondence Registration (PC-Reg), which embeds perception ability into registration models with only a few labels, greatly improving registration accuracy and reducing distortion. 1) We propose Perception-Correspondence Decoupling, which decouples the perception and correspondence actions of registration into two CNNs. Independent optimization and feature representations are therefore available, avoiding interference with the correspondence caused by the lack of texture constraints. 2) For few-shot learning, we propose Reverse Teaching, which aligns labeled and unlabeled images to each other to provide supervision for the structure and style knowledge in unlabeled images, thus generating additional training data. These data in turn teach our perception CNN more style and structure knowledge, improving its generalization ability. Our experiments on three datasets with only five labels demonstrate that PC-Reg achieves competitive registration accuracy and effectively reduces distortion. Compared with LC-VoxelMorph (λ = 1), we achieve 12.5%, 6.3% and 1.0% Reg-DSC improvements on the three datasets, revealing the great potential of our framework in clinical application.
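The Reverse Teaching idea described above can be sketched in code. This is a minimal illustration, not the paper's implementation: `estimate_flow` stands in for the correspondence CNN, `warp` is a toy nearest-neighbour resampler, and `reverse_teaching_pair` produces a pseudo-labeled pair from an unlabeled image by warping a labeled image's annotation onto it. All function names here are hypothetical.

```python
import numpy as np

def warp(image, flow):
    # Toy stand-in for a spatial transformer: apply a dense 2-D
    # displacement field (flow[0]=dy, flow[1]=dx) with nearest-neighbour
    # sampling and border clamping.
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def reverse_teaching_pair(labeled_img, label, unlabeled_img, estimate_flow):
    # Correspondence step: align the labeled image to the unlabeled one,
    # then warp its annotation to obtain a pseudo-label. The resulting
    # (unlabeled_img, pseudo_label) pair carries the unlabeled image's
    # style and structure back to the perception CNN as training data.
    flow = estimate_flow(labeled_img, unlabeled_img)
    pseudo_label = warp(label, flow)
    return unlabeled_img, pseudo_label
```

In the full framework the pseudo-labeled pairs would be mixed into the perception CNN's training set, so the few manual labels are amplified across all unlabeled images.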