Baldeon-Calisto Maria, Wei Zhouping, Abudalou Shatha, Yilmaz Yasin, Gage Kenneth, Pow-Sang Julio, Balagurunathan Yoganand
Departamento de Ingeniería Industrial and Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador.
Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States.
Front Nucl Med. 2023 Feb 6;2:1083245. doi: 10.3389/fnume.2022.1083245. eCollection 2022.
Prostate gland segmentation is the primary step in estimating gland volume, which aids in prostate disease management. In this study, we present a 2D-3D convolutional neural network (CNN) ensemble, PPZ-SegNet, that automatically segments the whole prostate gland along with the peripheral zone (PZ) using the T2-weighted (T2W) sequence of magnetic resonance imaging (MRI). The study used 4 different public data sets organized as Train #1 and Test #1 (independently derived from the same cohort), Test #2, Test #3, and Test #4. The prostate gland and PZ anatomy were manually delineated with a consensus read by a radiologist, except for the Test #4 cohort, which had pre-marked glandular anatomy. A Bayesian hyperparameter optimization method was applied to construct the network model (PPZ-SegNet) on the training cohort (Train #1, n = 150) using five-fold cross validation. Model evaluation was performed on an independent cohort of 283 T2W MRI prostate cases (Test #1 to #4) without any additional tuning. The data cohorts were derived from The Cancer Imaging Archive (TCIA): the PROSTATEx Challenge, Prostatectomy, Repeatability studies, and the PROMISE12 Challenge. Segmentation performance was evaluated by computing the Dice similarity coefficient and the Hausdorff distance between the deep-network-estimated regions and the radiologist-drawn annotations. The network segmented the prostate gland anatomy with an average Dice score of 0.86 in Test #1 (n = 192), 0.79 in Test #2 (n = 26), 0.81 in Test #3 (n = 15), and 0.62 in Test #4 (n = 50). We also found that the Dice coefficient improved with larger prostate volumes in 3 of the 4 test cohorts. The variation of Dice scores across test cohorts suggests the need for more diverse models that account for dependencies such as gland size, which would enable a universal network for prostate and PZ segmentation.
Our training and evaluation code can be accessed through the link: https://github.com/mariabaldeon/PPZ-SegNet.git.
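The Dice similarity coefficient used for evaluation measures the voxel-level overlap between a predicted mask and a reference annotation, with 1.0 indicating a perfect match. A minimal sketch of how it is computed on binary masks is shown below; the function name and the toy masks are illustrative and not taken from the PPZ-SegNet repository.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 2D masks standing in for one segmentation slice
pred = np.zeros((8, 8), dtype=np.uint8)
truth = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 2:6] = 1    # predicted region: 16 voxels
truth[3:7, 3:7] = 1   # reference region: 16 voxels, overlap = 9 voxels
print(dice_coefficient(pred, truth))  # 2*9 / (16+16) = 0.5625
```

In practice the same computation runs over 3D volumes, and the per-case scores are averaged within each test cohort to give the values reported above.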