Gandhi Deep B, Khalili Nastaran, Familiar Ariana M, Gottipati Anurag, Khalili Neda, Tu Wenxin, Haldar Shuvanjan, Anderson Hannah, Viswanathan Karthik, Storm Phillip B, Ware Jeffrey B, Resnick Adam, Vossough Arastoo, Nabavizadeh Ali, Fathi Kazerooni Anahita
Center for Data-Driven Discovery in Biomedicine (D3b), The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA.
Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA.
Neurooncol Adv. 2024 Dec 12;6(1):vdae190. doi: 10.1093/noajnl/vdae190. eCollection 2024 Jan-Dec.
Fully automatic skull-stripping and tumor segmentation are crucial for monitoring pediatric brain tumors (PBT). Current methods, however, often lack generalizability, particularly for rare tumors in the sellar/suprasellar regions and when applied to real-world clinical data in limited data scenarios. To address these challenges, we propose AI-driven techniques for skull-stripping and tumor segmentation.
Multi-institutional, multi-parametric MRI scans from 527 pediatric patients (n = 336 for skull-stripping, n = 489 for tumor segmentation) with various PBT histologies were processed to train separate nnU-Net-based deep learning models for skull-stripping, whole tumor (WT), and enhancing tumor (ET) segmentation. These models utilized single (T2/FLAIR) or multiple (T1-Gd and T2/FLAIR) input imaging sequences. Performance was evaluated using Dice scores, sensitivity, and 95% Hausdorff distances. Statistical comparisons included paired or unpaired 2-sample t-tests and Pearson's correlation coefficient based on Dice scores from different models and PBT histologies.
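The two evaluation metrics named above can be sketched as follows. This is a minimal illustrative implementation, not the authors' evaluation code: Dice follows its standard definition for binary masks, and the 95% Hausdorff distance is computed here over all nonzero voxels (a simplification; evaluation toolkits typically use extracted surface points).

```python
import numpy as np

def dice_score(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hausdorff_95(pred, truth):
    # 95th percentile of symmetric point-to-set distances between the
    # nonzero voxels of two masks (simplified: no surface extraction,
    # isotropic unit voxel spacing assumed)
    a = np.argwhere(pred)
    b = np.argwhere(truth)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)
```

For identical masks, Dice is 1.0 and HD95 is 0; partial overlap yields intermediate Dice values between 0 and 1.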
Dice scores for the skull-stripping models for whole brain and sellar/suprasellar region segmentation were 0.98 ± 0.01 (median 0.98) for both multi- and single-parametric models, with significant Pearson's correlation coefficients between single- and multi-parametric Dice scores (r > 0.80; P < .05 for all). Whole tumor Dice scores for single-input tumor segmentation models were 0.84 ± 0.17 (median = 0.90) for T2 and 0.82 ± 0.19 (median = 0.89) for FLAIR inputs. Enhancing tumor Dice scores were 0.65 ± 0.35 (median = 0.79) for T1-Gd+FLAIR and 0.64 ± 0.36 (median = 0.79) for T1-Gd+T2 inputs.
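The statistical comparisons reported above (Pearson's r between paired per-patient Dice scores, and a paired 2-sample t-test) can be sketched with NumPy alone. The Dice values below are hypothetical illustrative numbers, not the study's data:

```python
import numpy as np

# Hypothetical paired per-patient Dice scores from a single- and a
# multi-parametric model (illustrative values only)
single = np.array([0.97, 0.98, 0.99, 0.96, 0.98])
multi  = np.array([0.98, 0.98, 0.99, 0.97, 0.98])

# Pearson's correlation coefficient between the two models' scores
r = np.corrcoef(single, multi)[0, 1]

# Paired 2-sample t statistic: t = mean(d) / (std(d) / sqrt(n)),
# where d is the per-patient difference in Dice
d = single - multi
t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

A high r with a non-significant paired t-test would indicate the two models rank patients similarly without a systematic Dice offset, which is the pattern the skull-stripping results describe.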
Our skull-stripping models demonstrate excellent performance, including in the sellar/suprasellar regions, using single- or multi-parametric inputs. Additionally, our automated tumor segmentation models can reliably delineate whole lesions and ET regions, adapting to MRI sessions with missing sequences in limited-data contexts.