Ma Rui, Dai Xuegang, Yang Zuochao, Wei Zhixiong, Zhang Bin
Information Center, Gansu Provincial Maternity and Child Care Hospital (Gansu Provincial Central Hospital), Gansu, China.
PLoS One. 2025 Jul 17;20(7):e0327642. doi: 10.1371/journal.pone.0327642. eCollection 2025.
Automated spinal structure segmentation in sagittal MRI remains a non-trivial task due to high inter-patient variability and ambiguous anatomical boundaries. We propose CAFR-Net, a Transformer-contrastive hybrid framework that jointly models global semantic relations and local anatomical priors to enable precise multi-class segmentation. The architecture integrates (1) a multi-scale Transformer encoder for long-range dependency modeling, (2) a Locally Adaptive Feature Recalibration (LAFR) module that reweights feature responses across spatial and channel dimensions, and (3) a Contrastive Learning-based Regularization (CLR) scheme enforcing pixel-level semantic alignment. Evaluated on the SpineMRI dataset, CAFR-Net achieves state-of-the-art performance, surpassing prior methods by a clear margin with a Dice score of 92.04%, a Hausdorff distance (HD) of 3.52 mm, and an mIoU of 89.31%. These results underscore the framework's potential as a generalizable and reproducible solution for clinical-grade spinal image analysis.
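The abstract does not include implementation details, so the sketch below is only a rough illustration of the two auxiliary ideas named above: (a) a generic spatial-channel feature recalibration block in the spirit of LAFR, and (b) a simple supervised pixel-level contrastive regularizer in the spirit of CLR. All module names, layer sizes, the sampling budget, and the temperature are assumptions for illustration, not the authors' CAFR-Net code.

```python
# Illustrative sketch only -- NOT the authors' CAFR-Net implementation.
# Shapes, layer sizes, and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialChannelRecalibration(nn.Module):
    """Generic spatial-channel reweighting block (in the spirit of LAFR)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel branch: squeeze spatial dims, then gate each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: 1x1 conv produces a per-pixel gate.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweight features along channel and spatial dimensions, then fuse.
        return x * self.channel_gate(x) + x * self.spatial_gate(x)


def pixel_contrastive_loss(feats: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1,
                           max_pixels: int = 1024) -> torch.Tensor:
    """Supervised pixel-level contrastive regularizer (generic form).

    feats:  (B, C, H, W) decoder embeddings.
    labels: (B, H, W) integer class map; same-class pixels act as positives.
    """
    b, c, h, w = feats.shape
    feats = F.normalize(feats, dim=1).permute(0, 2, 3, 1).reshape(-1, c)
    labels = labels.reshape(-1)
    # Subsample pixels so the similarity matrix stays small.
    idx = torch.randperm(feats.size(0))[:max_pixels]
    feats, labels = feats[idx], labels[idx]
    sim = feats @ feats.t() / temperature                     # (N, N) logits
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, -1e9)                          # drop self-pairs
    pos = (labels[:, None] == labels[None, :]).float().masked_fill(eye, 0.0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = pos.sum(1).clamp(min=1.0)
    return -(pos * log_prob).sum(1).div(denom).mean()


if __name__ == "__main__":
    block = SpatialChannelRecalibration(channels=64)
    x = torch.randn(2, 64, 32, 32)
    print(block(x).shape)                                     # torch.Size([2, 64, 32, 32])
    emb = torch.randn(2, 64, 32, 32)
    lab = torch.randint(0, 5, (2, 32, 32))
    print(pixel_contrastive_loss(emb, lab).item())
```

In a training loop, a regularizer of this kind would typically be added to the segmentation loss with a small weight so that pixels of the same spinal structure are pulled together in embedding space while different structures are pushed apart; the exact weighting used by the authors is not stated in the abstract.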