qMRI Core Facility, NINDS, National Institutes of Health, Bethesda, MD.
Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom.
Top Magn Reson Imaging. 2022 Jun 1;31(3):31-39. doi: 10.1097/RMR.0000000000000296. Epub 2022 Jun 28.
OBJECTIVES: Automated whole brain segmentation from magnetic resonance images is of great interest for developing clinically relevant volumetric markers for various neurological diseases. Although deep learning methods have demonstrated remarkable potential in this area, they may perform poorly under nonoptimal conditions, such as limited training data availability. Because manual whole brain segmentation is an extremely tedious process, minimizing the data set size required to train segmentation algorithms is of wide interest. The purpose of this study was to compare, in the setting of limited training data, the performance of the prototypical deep learning segmentation architecture (U-Net) with a previously published atlas-free traditional machine learning method, Classification using Derivative-based Features (C-DEF), for whole brain segmentation.

MATERIALS AND METHODS: C-DEF and U-Net models were evaluated after training on manually curated data from 5, 10, and 15 participants in 2 research cohorts, each acquired at a separate institution: (1) people living with clinically diagnosed HIV infection and (2) people with relapsing-remitting multiple sclerosis. The models were also trained on data from between 5 and 295 participants using a large, publicly available, annotated data set of glioblastoma and lower grade glioma (brain tumor segmentation). Statistical analysis of the Dice similarity coefficient was performed using repeated-measures analysis of variance with Dunnett-Hsu pairwise comparisons.

RESULTS: C-DEF produced better segmentation than U-Net in the lesion (29.2%-38.9%) and cerebrospinal fluid (5.3%-11.9%) classes when trained with data from 15 or fewer participants. Unlike C-DEF, U-Net improved significantly as the size of the training data increased (24%-30% higher than baseline). In the brain tumor segmentation data set, C-DEF produced equivalent or better segmentations than U-Net for the enhancing tumor and peritumoral edema regions across all training data sizes explored. However, U-Net was more effective than C-DEF for segmenting necrotic/non-enhancing tumor when trained on 10 or more participants, probably because of the inconsistent signal intensity of that tissue class.

CONCLUSIONS: These results demonstrate that classical machine learning methods can produce more accurate brain segmentation than far more complex deep learning methods when only small or moderate amounts of training data are available (n ≤ 15). The magnitude of this advantage varies by tissue class and cohort, and U-Net may be preferable for deep gray matter and necrotic/non-enhancing tumor segmentation, particularly with larger training data sets (n ≥ 20). Given that segmentation models often need to be retrained for application to novel imaging protocols or pathology, the bottleneck associated with large-scale manual annotation could be avoided with classical machine learning algorithms, such as C-DEF.
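The evaluation metric throughout the study is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), which measures the overlap between a predicted segmentation mask and a reference (ground-truth) mask. The following is an illustrative sketch of how the metric is computed for a single binary class; it is not the authors' code, and the handling of two empty masks (returning 1.0) is a common but not universal convention.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: treat as perfect agreement (convention varies).
        return 1.0
    return 2.0 * intersection / total

# Toy example: two overlapping one-dimensional "segmentations".
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```

For multi-class segmentation (e.g., lesion, cerebrospinal fluid, enhancing tumor), the coefficient is typically computed per class and then compared across methods, as in the repeated-measures analysis described above.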