Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA.
Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey.
Med Image Anal. 2025 Jan;99:103382. doi: 10.1016/j.media.2024.103382. Epub 2024 Nov 8.
Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely due to a lack of publicly available datasets, benchmarking research efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022. We also collected CT scans of 1,350 patients from publicly available sources for benchmarking purposes. We introduced a new pancreas segmentation method, called PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet's accuracy in cross-modality (a total of 2,117 scans) and cross-center settings with the Dice coefficient and 95th-percentile Hausdorff distance (HD95) evaluation metrics. We used Cohen's kappa statistics for intra- and inter-rater agreement evaluation, and paired t-tests for volume and Dice comparisons. For segmentation accuracy, we achieved Dice coefficients of 88.3% (±7.2%, at case level) with CT, 85.0% (±7.9%) with T1W MRI, and 86.3% (±6.4%) with T2W MRI. Pancreas volume prediction was highly correlated with ground truth, with R values of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer agreement (kappa of 0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement scores. All MRI data are made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
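The two evaluation metrics named above can be stated concretely. The sketch below is a generic NumPy implementation of the Dice coefficient over binary masks and a brute-force point-set HD95, not the exact evaluation code used in the study (which would typically operate on extracted surface voxels of 3D masks):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def hd95(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two
    (n, k) point sets (e.g., boundary voxel coordinates).
    Brute-force O(n*m) pairwise distances, for illustration only."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    d_pred_to_gt = d.min(axis=1)  # nearest gt point for each pred point
    d_gt_to_pred = d.min(axis=0)  # nearest pred point for each gt point
    return max(np.percentile(d_pred_to_gt, 95),
               np.percentile(d_gt_to_pred, 95))
```

HD95 is preferred over the plain Hausdorff distance because taking the 95th percentile of surface distances, rather than the maximum, makes the metric robust to a few outlier boundary voxels.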