Salome Patrick, Sforazzini Francesco, Grugnara Gianluca, Kudak Andreas, Dostal Matthias, Herold-Mende Christel, Heiland Sabine, Debus Jürgen, Abdollahi Amir, Knoll Maximilian
Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany.
Heidelberg Medical Faculty, Heidelberg University, 69117 Heidelberg, Germany.
Cancers (Basel). 2023 Mar 17;15(6):1820. doi: 10.3390/cancers15061820.
MR image classification in datasets collected from multiple sources is complicated by inconsistent and missing DICOM metadata. Therefore, we aimed to establish a method for the efficient automatic classification of MR brain sequences.
Deep convolutional neural networks (DCNNs) were trained as one-vs-all classifiers to differentiate between six classes: T1-weighted (T1w), contrast-enhanced T1w, T2w, T2w-FLAIR, ADC, and SWI. Each classifier yields a probability, allowing threshold-based and relative probability assignment while excluding low-probability images (label: unknown; an open-set recognition problem). Data from three high-grade glioma (HGG) cohorts were assessed: C1 (320 patients, 20,101 MRI images) was used for training, while C2 (197 patients, 11,333 images) and C3 (256 patients, 3522 images) were used for testing. Two raters manually checked the images with an interactive labeling tool. Finally, the added value of MR-Class was evaluated via radiomics model performance for progression-free survival (PFS) prediction in C2, using the concordance index (C-I).
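The open-set assignment step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names follow the six sequences listed in the abstract, while the probability threshold of 0.95 and the function name `assign_label` are assumptions for demonstration only.

```python
# Hypothetical sketch of open-set label assignment from six
# one-vs-all classifier probabilities, as described in the abstract.
# The 0.95 threshold is an assumed value, not taken from the paper.

SEQUENCE_CLASSES = ["T1w", "T1w-CE", "T2w", "T2w-FLAIR", "ADC", "SWI"]

def assign_label(probs, threshold=0.95):
    """Return the class with the highest probability, or 'unknown'
    if no classifier is confident enough (open-set recognition)."""
    if len(probs) != len(SEQUENCE_CLASSES):
        raise ValueError("expected one probability per class")
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        # No one-vs-all classifier exceeds the threshold:
        # exclude the image rather than force a label.
        return "unknown"
    return SEQUENCE_CLASSES[best]
```

Keeping the rejection step separate from the argmax makes the confidence cutoff an explicit, tunable parameter rather than an implicit property of the classifiers.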
Annotation errors between the DICOM series descriptions and the derived labels were observed in approximately 10% of images in each cohort. MR-Class accuracy was 96.7% [95% CI: 95.8, 97.3] for C2 and 94.4% [95% CI: 93.6, 96.1] for C3. A total of 620 images were misclassified; manual assessment of these frequently revealed motion artifacts or anatomy altered by large tumors. Implementing MR-Class increased the PFS model C-I by 14.6% on average compared with a model trained without MR-Class.
We provide a DCNN-based method for the sequence classification of brain MR images and demonstrate its usability in two independent HGG datasets.