Tang Yucheng, Gao Riqiang, Lee Ho Hin, Chen Yunqiang, Gao Dashan, Bermudez Camilo, Bao Shunxing, Huo Yuankai, Savoie Brent V, Landman Bennett A
Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA.
12 Sigma Technologies, San Diego, CA, 92130, USA.
Med Phys. 2021 Mar;48(3):1276-1285. doi: 10.1002/mp.14706. Epub 2021 Jan 27.
Dynamic contrast-enhanced computed tomography (CT) is widely used to provide dynamic tissue contrast for diagnostic investigation and vascular identification. However, the phase information of contrast injection is typically recorded manually by technicians, which leads to missing or mislabeled phase annotations. Hence, imaging-based contrast phase identification is appealing but challenging, owing to large variations among contrast protocols, vascular dynamics, and metabolism, especially for clinically acquired CT scans. The purpose of this study is to perform imaging-based phase identification for dynamic abdominal CT across five representative contrast phases using a proposed adversarial learning framework.
A generative adversarial network (GAN) is proposed as a disentangled representation learning model. To explicitly model the different contrast phases, a low-dimensional common representation and a class-specific code are fused in a hidden layer; the fused low-dimensional features are then decoded for reconstruction and passed to a discriminator and a classifier. A total of 36,350 CT slices from 400 subjects are used to evaluate the proposed method with fivefold cross-validation, with splits made at the subject level. An additional 2216 slices from 20 independent subjects are employed as independent testing data and evaluated with a normalized multiclass confusion matrix.
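The abstract does not give implementation details, but the core idea of fusing a shared (common) representation with a class-specific phase code, decoding it for reconstruction, and scoring the result with a discriminator/classifier can be sketched as below. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; all module names, layer sizes, and the COMMON_DIM setting are illustrative assumptions (only the five-phase count comes from the text).

```python
# Illustrative sketch (not the authors' code) of a disentangled GAN for phase modeling.
import torch
import torch.nn as nn

NUM_PHASES = 5   # five representative contrast phases (from the abstract)
COMMON_DIM = 64  # assumed size of the shared low-dimensional representation


class Encoder(nn.Module):
    """Maps a CT slice to a low-dimensional common representation."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, COMMON_DIM)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class Decoder(nn.Module):
    """Reconstructs a slice from the common code fused with a class-specific phase code."""
    def __init__(self, out_size=64):
        super().__init__()
        self.fc = nn.Linear(COMMON_DIM + NUM_PHASES, 32 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            nn.Upsample(size=(out_size, out_size)),
        )

    def forward(self, z_common, phase_onehot):
        # Fuse the common representation with the one-hot phase code in a hidden layer.
        h = self.fc(torch.cat([z_common, phase_onehot], dim=1))
        return self.deconv(h.view(-1, 32, 8, 8))


class DiscriminatorClassifier(nn.Module):
    """Shared backbone with a real/fake head and a contrast-phase classification head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(32, 1)           # real vs. reconstructed
        self.cls_head = nn.Linear(32, NUM_PHASES)  # phase logits

    def forward(self, x):
        feat = self.backbone(x)
        return self.adv_head(feat), self.cls_head(feat)


# Quick shape check with a random 64x64 slice batch (illustrative only).
if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)
    phase = torch.eye(NUM_PHASES)[torch.tensor([0, 3])]  # one-hot phase codes
    enc, dec, dis = Encoder(), Decoder(), DiscriminatorClassifier()
    recon = dec(enc(x), phase)
    adv_logit, phase_logits = dis(recon)
    print(recon.shape, adv_logit.shape, phase_logits.shape)
```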
The proposed network achieved significantly higher phase-identification accuracy (0.93) than VGG (0.59), ResNet50 (0.62), StarGAN (0.72), and 3DSE (0.90) (P < 0.001, Stuart-Maxwell test on the normalized multiclass confusion matrix).
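For reference, the Stuart-Maxwell test of marginal homogeneity reported above can be computed as in the sketch below. This is a minimal illustration with synthetic counts, not the authors' analysis code; it assumes a k x k table pairing two methods' phase predictions on the same slices.

```python
# Minimal sketch of the Stuart-Maxwell test (assumed analysis setup, synthetic data).
import numpy as np
from scipy.stats import chi2


def stuart_maxwell(table):
    """Stuart-Maxwell test for a k x k paired contingency table.

    table[i, j] counts samples assigned phase i by method A and phase j by method B.
    Returns the chi-square statistic and p-value (df = k - 1).
    """
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    # Difference between row and column marginals, dropping the last category.
    d = (table.sum(axis=1) - table.sum(axis=0))[:-1]
    # Covariance matrix of the marginal differences.
    V = np.zeros((k - 1, k - 1))
    for i in range(k - 1):
        for j in range(k - 1):
            if i == j:
                V[i, j] = table[i].sum() + table[:, i].sum() - 2 * table[i, i]
            else:
                V[i, j] = -(table[i, j] + table[j, i])
    stat = float(d @ np.linalg.solve(V, d))
    pval = float(chi2.sf(stat, df=k - 1))
    return stat, pval


# Example with a synthetic 5x5 table of paired phase predictions (not real study data).
rng = np.random.default_rng(0)
print(stuart_maxwell(rng.integers(0, 50, size=(5, 5))))
```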
We show that adversarial learning of the discriminator can be beneficial for capturing contrast information across phases. The discriminator from the proposed disentangled network achieves promising results.