Mukherjee Sovanlal, Antony Ajith, Patnam Nandakumar G, Trivedi Kamaxi H, Karbhari Aashna, Nagaraj Madhu, Murlidhar Murlidhar, Goenka Ajit H
Department of Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA.
Professor of Radiology, Consultant, Divisions of Abdominal and Nuclear Radiology, Co-Chair, Nuclear Radiology Research Operations, Chair, Enterprise PET/MR Research, Education and Executive Committee, Program Co-Leader, Risk Assessment, Early Detection and Interception (REDI), Mayo Clinic Comprehensive Cancer Center (MCCCC), 200 First St SW, Charlton 1, Rochester, MN, 55905, USA.
Sci Rep. 2025 May 16;15(1):17096. doi: 10.1038/s41598-025-01802-9.
Accurate and fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground-truth segmentations performed by radiologists were used to train a 3D nnU-Net model through five-fold cross-validation, yielding an ensemble of the top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), indicating high spatial overlap with radiologist-annotated segmentations, and strong volumetric agreement (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
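For readers reproducing the reported evaluation, the sketch below illustrates how the two metrics cited in the abstract are conventionally computed: the Dice Similarity Coefficient (DSC) on per-case binary masks and Lin's Concordance Correlation Coefficient (CCC) on paired pancreas volumes across a cohort. This is not the authors' code; the function names and the example values are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper) of the reported metrics:
# per-case DSC between binary 3D masks and cohort-level Lin's CCC between
# predicted and radiologist-annotated pancreas volumes.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def concordance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's CCC between two paired measurement vectors (e.g. volumes in mL)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()  # population variances, as in Lin's estimator
    covariance = ((x - mean_x) * (y - mean_y)).mean()
    return 2.0 * covariance / (var_x + var_y + (mean_x - mean_y) ** 2)

if __name__ == "__main__":
    # Hypothetical usage with synthetic masks and illustrative volume values.
    rng = np.random.default_rng(0)
    pred_mask = rng.integers(0, 2, size=(64, 64, 64))
    true_mask = rng.integers(0, 2, size=(64, 64, 64))
    print("DSC:", dice_coefficient(pred_mask, true_mask))

    pred_volumes = np.array([71.2, 80.5, 64.9, 90.1])
    true_volumes = np.array([70.0, 82.0, 66.5, 88.0])
    print("CCC:", concordance_correlation(pred_volumes, true_volumes))
```

In a study of this design, DSC would typically be averaged over test cases (here reported as mean 0.94, SD 0.05), while CCC would be computed once over the paired predicted and ground-truth volumes of the whole test set or external cohort.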