Estrada Santiago, Lu Ran, Conjeti Sailesh, Orozco-Ruiz Ximena, Panos-Willuhn Joana, Breteler Monique M B, Reuter Martin
Image Analysis, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany.
Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany.
Magn Reson Med. 2020 Apr;83(4):1471-1483. doi: 10.1002/mrm.28022. Epub 2019 Oct 21.
Introduce and validate a novel, fast, and fully automated deep learning pipeline (FatSegNet) to accurately identify, segment, and quantify visceral and subcutaneous adipose tissue (VAT and SAT) within a consistent, anatomically defined abdominal region on Dixon MRI scans.
FatSegNet is composed of three stages: (a) consistent localization of the abdominal region using two 2D Competitive Dense Fully Convolutional Networks (CDFNet), (b) segmentation of adipose tissue on three views by independent CDFNets, and (c) view aggregation. FatSegNet is validated by: (1) comparison of segmentation accuracy (sixfold cross-validation), (2) test-retest reliability, (3) generalizability to randomly selected manually re-edited cases, and (4) replication of age and sex effects in the Rhineland Study, a large prospective population cohort.
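The final stage fuses the per-view predictions into a single 3D label map. The abstract does not specify the fusion rule; a minimal sketch, assuming a common approach of averaging the three views' class-probability maps and taking the voxelwise argmax (the function name and shapes here are illustrative, not from the paper):

```python
import numpy as np

def aggregate_views(prob_axial, prob_coronal, prob_sagittal):
    """Fuse per-view class-probability maps of shape [classes, D, H, W]
    by averaging them, then take the voxelwise argmax as the final label."""
    fused = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return fused.argmax(axis=0)

# Toy example: 2 classes over a 1x1x2 volume (illustrative values only)
ax = np.array([[[[0.9, 0.2]]], [[[0.1, 0.8]]]])
co = np.array([[[[0.6, 0.4]]], [[[0.4, 0.6]]]])
sa = np.array([[[[0.7, 0.3]]], [[[0.3, 0.7]]]])
labels = aggregate_views(ax, co, sa)
print(labels)  # voxel 0 -> class 0, voxel 1 -> class 1
```

Averaging soft probabilities, rather than majority-voting hard labels, lets a confident view outweigh two uncertain ones at ambiguous voxels.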
The CDFNet demonstrates increased accuracy and robustness compared to traditional deep learning networks. In Dice score, FatSegNet outperforms manual raters on VAT (0.850 vs. 0.788) and produces comparable results on SAT (0.975 vs. 0.982). The pipeline has excellent agreement for both test-retest (ICC VAT 0.998 and SAT 0.996) and manual re-editing (ICC VAT 0.999 and SAT 0.999).
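The Dice scores above measure volumetric overlap between a predicted and a reference mask (1.0 means perfect agreement). A minimal illustration of the standard Dice similarity coefficient on binary masks (toy arrays, not study data):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 2D masks standing in for a VAT segmentation and its reference
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, truth))  # 2*2 / (3+3) = 0.666...
```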
FatSegNet generalizes well to different body shapes, sensitively replicates known VAT and SAT volume effects in a large cohort study and permits localized analysis of fat compartments. Furthermore, it can reliably analyze a 3D Dixon MRI in ∼1 minute, providing an efficient and validated pipeline for abdominal adipose tissue analysis in the Rhineland Study.