School of Computer and Electronic Information, Guangxi University, Nanning, Guangxi, 530004, People's Republic of China.
Guangxi Key Laboratory of Multimedia Communications Network Technology, Guangxi University, Nanning, Guangxi, 530004, People's Republic of China.
Phys Med Biol. 2024 Feb 28;69(5). doi: 10.1088/1361-6560/ad2717.
Medical image affine registration is a crucial prerequisite for deformable registration. On the one hand, traditional affine registration methods based on iterative optimization are very time-consuming, so they are incompatible with most real-time medical applications. On the other hand, convolutional neural networks are limited in modeling long-range spatial relationships among features due to inductive biases such as weight sharing and locality, which hinders affine registration tasks. Therefore, the development of real-time, high-accuracy affine medical image registration algorithms is necessary for registration applications.

In this paper, we propose a deep learning-based coarse-to-fine global and local feature fusion architecture for fast affine registration, trained end-to-end in an unsupervised manner. We use multiscale convolutional kernels as our elemental convolutional blocks to enhance feature extraction. Then, to learn the long-range spatial relationships of the features, we propose a new affine registration framework with weighted global positional attention that fuses global feature mapping and local feature mapping. Moreover, a fusion regressor is designed to generate the affine parameters.

The additive fusion method adapts to both the global mapping and the local mapping, which improves affine registration accuracy without center-of-mass initialization. In addition, the max pooling layer and the multiscale convolutional kernel coding module strengthen the model's affine registration capability.

We validate the effectiveness of our method on the OASIS dataset with 414 3D brain MRI scans. Comprehensive results demonstrate that our method achieves state-of-the-art affine registration accuracy with very efficient runtimes.
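The two core operations sketched in the abstract can be illustrated in a few lines: an additive (weighted) fusion of global and local feature maps, and the application of the 12 predicted affine parameters (a 3×4 matrix) to 3D voxel coordinates. This is a minimal NumPy sketch, not the paper's implementation; the function names, the scalar weight `w`, and the exact fusion form are illustrative assumptions.

```python
import numpy as np

def additive_fusion(global_map, local_map, w):
    """Weighted additive fusion of a global and a local feature map.
    The adaptive weight w would be learned by the network; here it is a scalar."""
    return w * global_map + (1.0 - w) * local_map

def affine_warp_coords(coords, theta):
    """Map (N, 3) voxel coordinates through a (3, 4) affine matrix theta
    (the 12 parameters a fusion regressor would output)."""
    homo = np.concatenate([coords, np.ones((coords.shape[0], 1))], axis=1)  # (N, 4)
    return homo @ theta.T  # (N, 3) transformed coordinates

# Sanity check: the identity affine leaves coordinates unchanged.
theta_id = np.hstack([np.eye(3), np.zeros((3, 1))])  # (3, 4)
pts = np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
print(np.allclose(affine_warp_coords(pts, theta_id), pts))  # True
```

In an actual registration network, `theta` would be regressed from the fused feature maps, and the warped coordinates would drive a resampling (e.g. trilinear interpolation) of the moving image.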