From the Departments of Electrical and Computer Engineering (L.U., A.B.) and Medical Imaging (L.U., B.W., L.M., M.H., M.I.A., A.B.).
AJNR Am J Neuroradiol. 2020 Jun;41(6):1061-1069. doi: 10.3174/ajnr.A6538. Epub 2020 May 21.
Fast and accurate quantification of globe volumes in the event of an ocular trauma can provide clinicians with valuable diagnostic information. In this work, an automated workflow using a deep learning-based convolutional neural network is proposed for prediction of globe contours and their subsequent volume quantification in CT images of the orbits.
The proposed network, a 2D Modified Residual UNET (MRes-UNET2D), was trained on axial CT images from 80 subjects with no imaging or clinical findings of globe injury. The predicted globe contours and volume estimates were compared with manual annotations by experienced observers in 2 different test cohorts.
On the first test cohort (n = 18), the average Dice, precision, and recall scores were 0.95, 96%, and 95%, respectively. The average 95% Hausdorff distance was only 1.5 mm, with a 5.3% error in globe volume estimates. No statistically significant difference (P = .72) was observed between the median globe volume estimates from our model and the ground truth. On the second test cohort (n = 9), in which a neuroradiologist and 2 residents independently marked the globe contours, MRes-UNET2D (Dice = 0.95) approached human interobserver variability (Dice = 0.94). We also demonstrated the utility of the inter-globe volume difference as a quantitative marker of trauma in 3 subjects with known globe injuries.
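The overlap metrics reported above (Dice, precision, recall) are standard functions of the voxel-wise true positives, false positives, and false negatives between a predicted and a reference mask. A minimal illustrative sketch (not the authors' implementation; the toy masks are hypothetical) is:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, precision, and recall for two binary segmentation masks.

    pred, gt: boolean NumPy arrays of identical shape
    (predicted and ground-truth masks, respectively).
    """
    tp = np.logical_and(pred, gt).sum()     # voxels marked in both
    fp = np.logical_and(pred, ~gt).sum()    # predicted but not in reference
    fn = np.logical_and(~pred, gt).sum()    # in reference but missed
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall

# toy example: 4x4 masks that disagree in two voxels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True            # reference: 4 voxels
pred = gt.copy()
pred[1, 1] = False             # one false negative
pred[0, 0] = True              # one false positive
dice, prec, rec = overlap_metrics(pred, gt)   # 0.75, 0.75, 0.75
```

The 95% Hausdorff distance is a boundary-based complement to these overlap scores (e.g. via `scipy.spatial.distance` on the contour points); it penalizes outlying contour errors that voxel-overlap metrics can hide.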
We showed that, with fast prediction times, globes can be reliably detected and their volumes quantified in CT images of the orbits across a variety of acquisition parameters.
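Volume quantification from a predicted mask reduces to counting foreground voxels and scaling by the physical voxel size from the CT header; the inter-globe volume difference used as a trauma marker is then a simple subtraction. A sketch under assumed, hypothetical voxel spacing and mask shapes (not the authors' code):

```python
import numpy as np

def globe_volume_ml(mask, spacing_mm):
    """Volume of a binary mask in millilitres.

    mask: boolean NumPy array (one globe's segmentation).
    spacing_mm: (dz, dy, dx) voxel spacing in mm from the CT header.
    """
    voxel_vol_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_vol_mm3 / 1000.0   # 1000 mm^3 = 1 mL

# hypothetical spacing and masks for illustration only
spacing = (1.0, 0.5, 0.5)              # 0.25 mm^3 per voxel
left = np.ones((10, 10, 10), bool)     # 1000 voxels -> 0.25 mL
right = np.ones((10, 10, 8), bool)     #  800 voxels -> 0.20 mL

# inter-globe volume difference: candidate quantitative trauma marker
diff_ml = abs(globe_volume_ml(left, spacing) - globe_volume_ml(right, spacing))
```

Because a healthy contralateral globe serves as each patient's own reference, the difference is largely insensitive to normal anatomical variation in absolute globe size.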