Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:3933-3937. doi: 10.1109/EMBC46164.2021.9630110.
Individuals with obesity have larger amounts of visceral (VAT) and subcutaneous adipose tissue (SAT) in their body, increasing the risk for cardiometabolic diseases. The reference standard for quantifying SAT and VAT uses manual annotations of magnetic resonance images (MRI), which requires expert knowledge and is time-consuming. Although deep learning-based methods for automated SAT and VAT segmentation have been investigated, performance for VAT remains suboptimal (Dice scores of 0.43 to 0.89). A key limitation of previous work is that it did not fully exploit the multi-contrast information from MRI or the 3D anatomical context, both of which are critical for addressing the complex, spatially varying structure of VAT. An additional challenge is the imbalance between the number and distribution of pixels representing SAT and VAT. This work proposes a network based on the 3D U-Net that uses the full field-of-view volumetric T1-weighted, water, and fat images from dual-echo Dixon MRI as a multi-channel input to automatically segment SAT and VAT in adults with overweight/obesity. In addition, this work extends the 3D U-Net to a new Attention-based Competitive Dense 3D U-Net (ACD 3D U-Net) trained with a class frequency-balancing Dice loss (FBDL). In an initial testing dataset, the proposed 3D U-Net and the ACD 3D U-Net with FBDL achieved 3D Dice scores (mean ± standard deviation) of 0.99 ± 0.01 and 0.99 ± 0.01 for SAT, and 0.95 ± 0.04 and 0.96 ± 0.04 for VAT, respectively, compared to manual annotations. The proposed 3D networks had rapid inference times (<60 ms/slice) and can enable automated segmentation of SAT and VAT.
Clinical relevance - This work developed 3D neural networks to automatically, accurately, and rapidly segment visceral and subcutaneous adipose tissue on MRI, which can help to characterize the risk for cardiometabolic diseases such as diabetes, elevated glucose levels, and hypertension.
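The abstract does not specify the exact form of the class frequency-balancing Dice loss (FBDL). A common way to balance a Dice loss across classes of very different sizes (e.g. abundant background vs. sparse VAT voxels) is to weight each class by the inverse of its voxel frequency, as in the generalized Dice loss. The sketch below is a hypothetical NumPy illustration of that idea, not the authors' implementation; the function name and weighting scheme are assumptions.

```python
import numpy as np

def frequency_balanced_dice_loss(probs, labels, eps=1e-6):
    """Hypothetical sketch of a class frequency-balancing Dice loss.

    probs:  (C, N) array of per-class softmax probabilities over N voxels
    labels: (C, N) one-hot ground-truth segmentation

    Each class is weighted by the inverse of its squared voxel count
    (generalized-Dice-style weighting, assumed here), so that rare
    classes such as VAT contribute comparably to abundant classes.
    """
    freq = labels.sum(axis=1)                 # voxel count per class
    w = 1.0 / (freq ** 2 + eps)               # inverse-frequency weights
    intersect = (w * (probs * labels).sum(axis=1)).sum()
    union = (w * (probs.sum(axis=1) + freq)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)
```

A perfect prediction drives the loss toward 0, while the weighting keeps a small class (such as VAT) from being swamped by the background term during training.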