Yang Qi, Yu Xin, Lee Ho Hin, Tang Yucheng, Bao Shunxing, Gravenstein Kristofer S, Moore Ann Zenobia, Makrogiannis Sokratis, Ferrucci Luigi, Landman Bennett A
Computer Science, Vanderbilt University, TN.
Electrical and Computer Engineering, Vanderbilt University, TN.
Proc SPIE Int Soc Opt Eng. 2022 Feb-Mar;12032. doi: 10.1117/12.2611664. Epub 2022 Apr 4.
Muscle, bone, and fat segmentation of CT thigh slices is essential for body composition research. Voxel-wise image segmentation enables quantification of tissue properties including area, intensity, and texture. Deep learning approaches have had substantial success in medical image segmentation, but they typically require large amounts of labeled data. Due to the high cost of manual annotation, training deep learning models with limited human-labeled data is desirable but challenging. Inspired by transfer learning, we propose a two-stage deep learning pipeline to address this issue in thigh segmentation. We study 2836 slices from the Baltimore Longitudinal Study of Aging (BLSA) and 121 slices from the Genetic and Epigenetic Signatures of Translational Aging Laboratory Testing (GESTALT) study. First, we generate pseudo-labels with approximate hand-crafted approaches based on CT intensity and anatomical morphology. Then, these pseudo-labels are used to train deep neural networks from scratch. Finally, the first-stage model is loaded as initialization and fine-tuned with a more limited set of expert human labels. We evaluate the performance of this framework on 56 thigh CT scans and obtain average Dice scores of 0.979, 0.969, 0.953, 0.980, and 0.800 for five tissues: muscle, cortical bone, internal bone, subcutaneous fat, and intermuscular fat, respectively. We evaluate generalizability by manually reviewing 3504 external BLSA single-thigh images from 1752 thigh slices. The results are consistent and pass human review, with only 150 failed thigh images, demonstrating that the proposed method generalizes well.
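The abstract's first stage generates pseudo-labels from CT intensity and anatomical morphology. The sketch below illustrates one way such a rule-based pseudo-labeler could look; the Hounsfield-unit windows, morphology operations, and function names are illustrative assumptions, not the thresholds or implementation reported in the paper.

```python
import numpy as np
from scipy import ndimage

# Approximate Hounsfield-unit windows for thigh tissues (assumed values,
# not taken from the paper).
FAT_HU = (-190, -30)
MUSCLE_HU = (-29, 150)
BONE_HU_MIN = 151

def pseudo_label_slice(ct_hu: np.ndarray) -> np.ndarray:
    """Coarse per-voxel pseudo-label map for one thigh slice.

    Labels: 0 background, 1 muscle, 2 cortical bone, 3 internal bone,
    4 subcutaneous fat, 5 intermuscular fat.
    """
    label = np.zeros(ct_hu.shape, dtype=np.uint8)

    fat = (ct_hu >= FAT_HU[0]) & (ct_hu <= FAT_HU[1])
    muscle = (ct_hu >= MUSCLE_HU[0]) & (ct_hu <= MUSCLE_HU[1])
    cortical_bone = ct_hu >= BONE_HU_MIN

    # Cortical bone is the bright shell; internal bone (marrow) is whatever
    # falls inside the filled bone region but below the cortical threshold.
    bone_filled = ndimage.binary_fill_holes(cortical_bone)
    internal_bone = bone_filled & ~cortical_bone

    # Split fat by anatomical position: fat inside the closed muscle envelope
    # is intermuscular, fat outside it is subcutaneous.
    muscle_envelope = ndimage.binary_fill_holes(
        ndimage.binary_closing(muscle, iterations=3)
    )
    intermuscular_fat = fat & muscle_envelope
    subcutaneous_fat = fat & ~muscle_envelope

    label[muscle] = 1
    label[cortical_bone] = 2
    label[internal_bone] = 3
    label[subcutaneous_fat] = 4
    label[intermuscular_fat] = 5
    return label
```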
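The second and third steps of the pipeline are train-from-scratch on pseudo-labels followed by fine-tuning on expert labels. The following is a minimal PyTorch sketch of that two-stage schedule; the tiny stand-in network, toy datasets, epoch counts, and learning rates are placeholders, since the paper's actual architecture and hyperparameters are not specified here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: random tensors stand in for CT slices (1 channel) and
# integer label maps (6 classes). In practice these would be the BLSA/GESTALT
# slices paired with pseudo-labels (stage 1) or expert labels (stage 2).
def toy_dataset(n):
    images = torch.randn(n, 1, 64, 64)
    labels = torch.randint(0, 6, (n, 64, 64))
    return TensorDataset(images, labels)

pseudo_label_ds = toy_dataset(128)   # large, noisy pseudo-labeled pool
expert_label_ds = toy_dataset(16)    # small expert-annotated set

# Tiny stand-in segmentation network (assumed, not the paper's model).
def make_model():
    return torch.nn.Sequential(
        torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, 6, 1),   # 6 output channels: 5 tissues + background
    )

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for image, target in loader:
            opt.zero_grad()
            loss = loss_fn(model(image), target)
            loss.backward()
            opt.step()
    return model

model = make_model()

# Stage 1: train from scratch on pseudo-labels from the rule-based step.
train(model, DataLoader(pseudo_label_ds, batch_size=8, shuffle=True),
      epochs=5, lr=1e-3)
torch.save(model.state_dict(), "stage1_pseudo.pt")

# Stage 2: load the stage-1 weights as initialization and fine-tune on the
# limited expert-labeled set, typically with a lower learning rate.
model.load_state_dict(torch.load("stage1_pseudo.pt"))
train(model, DataLoader(expert_label_ds, batch_size=8, shuffle=True),
      epochs=5, lr=1e-4)
```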