Tang Yucheng, Lee Ho Hin, Xu Yuchen, Tang Olivia, Chen Yunqiang, Gao Dashan, Han Shizhong, Gao Riqiang, Bermudez Camilo, Savona Michael R, Abramson Richard G, Huo Yuankai, Landman Bennett A
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA 37212.
12 Sigma Technology, San Diego, CA, USA 92130.
Proc SPIE Int Soc Opt Eng. 2020 Feb;11313. doi: 10.1117/12.2549438. Epub 2020 Mar 10.
Dynamic contrast enhanced computed tomography (CT) is an imaging technique that provides critical information on the relationship of vascular structure and dynamics in the context of underlying anatomy. A key challenge for image processing with contrast enhanced CT is that phase discrepancies are latent in different tissues due to contrast protocols, vascular dynamics, and metabolism variance. Previous studies have proposed deep learning frameworks for classifying contrast enhancement with networks inspired by computer vision. Here, we revisit the challenge in the context of whole abdomen contrast enhanced CTs. To capture and compensate for the complex contrast changes, we propose a novel discriminator in the form of a multi-domain disentangled representation learning network. The goal of this network is to learn an intermediate representation that separates contrast enhancement from anatomy and enables classification of images with varying contrast time. Briefly, the discriminator of our unpaired contrast disentangling GAN (CD-GAN) follows the ResNet architecture to classify a CT scan by enhancement phase. To evaluate the approach, we trained the enhancement phase classifier on 21060 slices from two clinical cohorts of 230 subjects. The scans were manually labeled with three independent enhancement phases (non-contrast, portal venous, and delayed). Testing was performed on 9100 slices from 30 independent subjects who had been imaged with CT scans from all contrast phases. Performance was quantified in terms of the multi-class normalized confusion matrix. The proposed network significantly outperformed the baseline UNet, ResNet50, and StarGAN, with accuracy scores of 0.54, 0.55, 0.62, and 0.91, respectively (p<0.0001, paired t-test for ResNet versus CD-GAN). The proposed discriminator from the disentangled network presents a promising technique that may allow deeper modeling of dynamic imaging against patient-specific anatomies.
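The multi-class normalized confusion matrix used for evaluation can be sketched as follows. This is a minimal illustration with hypothetical labels, not the authors' evaluation code; it assumes the three phase classes (non-contrast, portal venous, delayed) are encoded as 0, 1, 2 and that each row of the matrix is normalized over the true-phase count:

```python
import numpy as np

# Hypothetical phase labels: 0 = non-contrast, 1 = portal venous, 2 = delayed
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 0])

n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=float)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1  # rows index true phase, columns predicted phase

# Row-normalize: each row gives the fraction of slices of a true phase
# assigned to each predicted phase, so every row sums to 1.
cm_norm = cm / cm.sum(axis=1, keepdims=True)

# Overall accuracy is the trace of the raw matrix over the total count.
accuracy = np.trace(cm) / cm.sum()
print(cm_norm)
print(accuracy)  # 0.75 for these toy labels
```

With real predictions from each model (UNet, ResNet50, StarGAN, CD-GAN), the same computation yields the per-model accuracy scores reported above.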