Yalcinkaya Dilek M, Youssef Khalid, Heydari Bobak, Simonetti Orlando, Dharmakumar Rohan, Raman Subha, Sharif Behzad
Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA.
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA.
Med Image Comput Comput Assist Interv. 2023 Oct;14222:453-462. doi: 10.1007/978-3-031-43898-1_44. Epub 2023 Oct 1.
Dynamic contrast-enhanced (DCE) cardiac magnetic resonance imaging (CMRI) is a widely used modality for diagnosing myocardial blood flow (perfusion) abnormalities. During a typical free-breathing DCE-CMRI scan, close to 300 time-resolved images of myocardial perfusion are acquired at various contrast "wash in/out" phases. Manual segmentation of myocardial contours in each time-frame of a DCE image series can be tedious and time-consuming, particularly when non-rigid motion correction has failed or is unavailable. While deep neural networks (DNNs) have shown promise for analyzing DCE-CMRI datasets, a "dynamic quality control" (dQC) technique for reliably detecting failed segmentations is lacking. Here we propose a new space-time uncertainty metric as a dQC tool for DNN-based segmentation of free-breathing DCE-CMRI datasets, validate the proposed metric on an external dataset, and establish a human-in-the-loop framework to improve the segmentation results. In the proposed approach, the top 10% most uncertain segmentations, as detected by our dQC tool, were referred to a human expert for refinement. This approach resulted in a significant increase in the Dice score (p < 0.001) and a notable decrease in the number of images with failed segmentation (16.2% to 11.3%), whereas the alternative approach of randomly selecting the same number of segmentations for human referral did not achieve any significant improvement. Our results suggest that the proposed dQC framework has the potential to accurately identify poor-quality segmentations and may enable efficient DNN-based analysis of DCE-CMRI in a human-in-the-loop pipeline for clinical interpretation and reporting of dynamic CMRI datasets.
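The uncertainty-based referral step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's actual space-time dQC metric is not specified here, so mean predictive entropy over the segmentation's space-time volume is used as a hypothetical stand-in, and the function names are illustrative.

```python
import numpy as np

def spacetime_uncertainty(probs):
    """Per-case uncertainty score from a segmentation DNN's output.

    probs: array of shape (T, H, W) holding the foreground (myocardium)
    probability for each of T time-frames. Mean binary predictive
    entropy over space and time serves as a stand-in dQC score here.
    """
    p = np.clip(probs, 1e-7, 1.0 - 1e-7)  # avoid log(0)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return float(entropy.mean())

def refer_most_uncertain(scores, frac=0.10):
    """Return indices of the top `frac` most uncertain cases,
    i.e., the cases flagged for human-expert refinement."""
    k = max(1, int(round(frac * len(scores))))
    order = np.argsort(scores)[::-1]  # sort descending by uncertainty
    return order[:k].tolist()
```

In a human-in-the-loop pipeline, the flagged cases would be re-contoured by an expert while the remaining ~90% keep their automatic segmentations.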