
Use of 2D U-Net Convolutional Neural Networks for Automated Cartilage and Meniscus Segmentation of Knee MR Imaging Data to Determine Relaxometry and Morphometry.

Affiliation

From the Department of Radiology and Biomedical Imaging and Center for Digital Health Innovation (CDHI), University of California, San Francisco, 1700 Fourth St, Suite 201, QB3 Building, San Francisco, CA 94107.

Publication Information

Radiology. 2018 Jul;288(1):177-185. doi: 10.1148/radiol.2018172322. Epub 2018 Mar 27.

Abstract

Purpose: To analyze how the accuracy and precision of automatic segmentation, compared with manual segmentation, translate to morphometry and relaxometry, and how automation increases the speed and accuracy of the work flow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA).

Materials and Methods: This retrospective study analyzed 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) T1-weighted spoiled gradient-recalled acquisition in the steady state images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated by its Dice coefficient overlap with the manual segmentation, as well as by its ability to quantify relaxometry and morphology in a longitudinally repeatable way.

Results: The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments and reaching 0.809 and 0.753 for the lateral and medial meniscus, respectively. The models took an average of 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification were 0.8233 and 0.8603 for T1 and T2 values, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual method.

Conclusion: U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic measurements for the monitoring and diagnosis of OA.

© RSNA, 2018. Online supplemental material is available for this article.
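The abstract states only that the model is "based on the U-Net convolutional network architecture" and gives no implementation details. Below is a minimal sketch of a generic 2D U-Net in PyTorch, shown solely to illustrate the encoder-decoder-with-skip-connections pattern; the class name, depth, channel widths, input size, and number of output classes are illustrative assumptions, not the authors' configuration.

```python
# Minimal generic 2D U-Net sketch in PyTorch (illustrative only; hyperparameters
# such as depth, filter counts, and num_classes are assumptions, not the paper's).
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UNet2D(nn.Module):
    def __init__(self, in_channels=1, num_classes=7):
        # num_classes=7 is a placeholder (e.g., background plus cartilage and
        # meniscus compartments); the true label set is defined in the paper.
        super().__init__()
        self.enc1 = double_conv(in_channels, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(256, 512)
        self.up3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.dec3 = double_conv(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        # Contracting path, keeping the feature maps for skip connections.
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        # Expanding path: upsample, concatenate the skip, then convolve.
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits


if __name__ == "__main__":
    model = UNet2D(in_channels=1, num_classes=7)
    slice_batch = torch.randn(2, 1, 256, 256)  # two single-channel MR slices
    print(model(slice_batch).shape)  # torch.Size([2, 7, 256, 256])
```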
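Segmentation quality is reported as Dice coefficient overlap with the manual masks, Dice = 2|A ∩ B| / (|A| + |B|). The following is a small sketch of that metric assuming binary per-compartment masks; the helper name and the toy arrays are hypothetical and do not reflect the authors' evaluation pipeline.

```python
# Dice overlap between an automatic and a manual binary mask (illustrative sketch).
import numpy as np


def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice = 2 * |intersection| / (|auto| + |manual|) for same-shape binary masks."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    denom = auto.sum() + manual.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


if __name__ == "__main__":
    a = np.zeros((4, 4), dtype=int)
    b = np.zeros((4, 4), dtype=int)
    a[1:3, 1:3] = 1      # 4 pixels labeled by the automatic mask
    b[1:3, 1:4] = 1      # 6 pixels in the manual mask, 4 of which overlap
    print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```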


