

Lightweight cross-resolution coarse-to-fine network for efficient deformable medical image registration.

Authors

Liu Jun, Shen Nuo, Wang Wenyi, Li Xiangyu, Wang Wei, Yuan Yongfeng, Tian Ye, Luo Gongning, Wang Kuanquan

Affiliations

School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang, China.

School of Computer Science and Technology, Harbin Institute of Technology Shenzhen, Shenzhen, Guangdong, China.

Publication Information

Med Phys. 2025 Apr 25. doi: 10.1002/mp.17827.

Abstract

BACKGROUND

Accurate and efficient deformable medical image registration is crucial in medical image analysis. While recent deep learning-based registration methods have achieved state-of-the-art accuracy, they often suffer from large parameter counts and slow inference, limiting their efficiency. Reducing model size or input resolution can improve computational efficiency but frequently yields suboptimal accuracy.

PURPOSE

To address the trade-off between high accuracy and efficiency, we propose a Lightweight Cross-Resolution Coarse-to-Fine registration framework, termed LightCRCF.

METHODS

Our method is built on an ultra-lightweight U-Net architecture with only 0.1 million parameters, offering remarkable efficiency. To mitigate the accuracy degradation caused by the reduced parameter count while preserving the lightweight nature of the network, LightCRCF introduces three key innovations: (1) an efficient cross-resolution coarse-to-fine (C2F) registration strategy, integrated into the lightweight network, that progressively decomposes the deformation field into multiresolution subfields to capture fine-grained deformations; (2) a Texture-aware Reparameterization (TaRep) module that integrates Sobel and Laplacian operators to extract rich textural information; (3) a Group-flow Reparameterization (GfRep) module that captures diverse deformation modes by decomposing the deformation field into multiple groups. Furthermore, we introduce a structural reparameterization technique that improves training accuracy through the multibranch structures of the TaRep and GfRep modules, while maintaining efficient inference by equivalently transforming these multibranch structures into single-branch standard convolutions.
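The equivalence underlying this kind of structural reparameterization is that convolution is linear in its kernel: parallel branches whose outputs are summed can be folded into a single convolution with the summed kernel. The sketch below illustrates this with fixed Sobel and Laplacian kernels alongside a learnable kernel, in the spirit of the TaRep module; the kernel sizes, shapes, and single-channel setup are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# Fixed texture-extraction kernels (Sobel-x and Laplacian).
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

# A "learnable" 3x3 kernel (random here, for illustration only).
rng = np.random.default_rng(0)
learnable = rng.standard_normal((3, 3))

x = rng.standard_normal((8, 8))

# Training-time multibranch output: sum of three parallel convolutions.
multi = conv2d(x, sobel_x) + conv2d(x, laplacian) + conv2d(x, learnable)

# Inference-time reparameterization: fold all branches into one kernel.
merged = sobel_x + laplacian + learnable
single = conv2d(x, merged)

assert np.allclose(multi, single)  # identical outputs from a single conv
```

The same identity extends to multichannel convolutions (and, with bookkeeping for channel grouping, to group-wise branches like GfRep), which is why the multibranch structure can be used freely during training at no inference-time cost.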

RESULTS

We evaluate LightCRCF against various methods on three public MRI datasets (LPBA, OASIS, and ACDC) and one CT dataset (abdomen CT). Following previous data-partitioning protocols, the LPBA dataset comprises 30 training image pairs and nine testing image pairs. For the OASIS dataset, the training, validation, and testing sets consist of 1275, 110, and 660 image pairs, respectively. Similarly, for the ACDC dataset, the training, validation, and testing sets include 180, 20, and 100 image pairs, respectively. For intersubject registration on the abdomen CT dataset, there are 380 training pairs, six validation pairs, and 42 testing pairs. Compared to state-of-the-art C2F methods, LightCRCF achieves comparable accuracy (DSC, HD95, and MSE) while performing significantly better on all efficiency metrics (Params, VRAM, FLOPs, and inference time). Relative to efficiency-first approaches, LightCRCF significantly outperforms them on accuracy metrics.

CONCLUSIONS

Our LightCRCF method offers a favorable trade-off between accuracy and efficiency, maintaining high accuracy while achieving superior efficiency, thereby highlighting its potential for clinical applications. The code will be available at https://github.com/PerceptionComputingLab/LightCRCF.

