Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA.
Department of Radiology, University of Iowa, Iowa City, Iowa, USA.
Med Phys. 2023 Sep;50(9):5698-5714. doi: 10.1002/mp.16365. Epub 2023 Mar 26.
Chest computed tomography (CT) enables characterization of pulmonary diseases by producing high-resolution and high-contrast images of the intricate lung structures. Deformable image registration is used to align chest CT scans at different lung volumes, yielding estimates of local tissue expansion and contraction.
We investigated the utility of deep generative models for directly predicting local tissue volume change from lung CT images, bypassing computationally expensive iterative image registration and providing a method that can be applied when either one or two CT scans are available.
A residual regression convolutional neural network, called Reg3DNet+, is proposed for directly regressing high-resolution images of local tissue volume change (i.e., Jacobian) from CT images. Image registration was performed between lung volumes at total lung capacity (TLC) and functional residual capacity (FRC) using a tissue mass- and structure-preserving registration algorithm. The Jacobian image was calculated from the registration-derived displacement field and used as the ground truth for local tissue volume change. Four separate Reg3DNet+ models were trained to predict Jacobian images using a multifactorial study design to compare the effects of network input (i.e., single image vs. paired images) and output space (i.e., FRC vs. TLC). The models were trained and evaluated on image datasets from the COPDGene study. Models were evaluated against the registration-derived Jacobian images using local, regional, and global evaluation metrics.
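For illustration, the Jacobian (local tissue volume change) image described above is the voxel-wise determinant of the deformation gradient of the registration transform. The sketch below is a minimal, hypothetical implementation, not the authors' registration code; it assumes the displacement field is stored as a NumPy array of shape (3, Z, Y, X) in millimeters with known voxel spacing.

# Minimal sketch: Jacobian determinant of a deformation phi(x) = x + u(x),
# where u is a registration-derived displacement field (assumed array layout
# and units; not the published implementation).
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    # disp: displacement field, shape (3, Z, Y, X), in physical units (mm)
    # spacing: voxel spacing (dz, dy, dx) in mm
    grads = [np.gradient(disp[c], *spacing, axis=(0, 1, 2)) for c in range(3)]
    # Deformation gradient J = I + du/dx, assembled per voxel.
    J = np.empty(disp.shape[1:] + (3, 3), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    # det(J) > 1 indicates local expansion, det(J) < 1 local contraction.
    return np.linalg.det(J)

# Example: a zero displacement field yields det(J) = 1 everywhere.
# disp = np.zeros((3, 64, 64, 64)); jac = jacobian_determinant(disp)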
Statistical analysis revealed that both factors, network input and output space, were significant determinants of the evaluation metrics. Paired-input models performed better than single-input models, and model performance was better in the FRC output space than in the TLC output space. Mean structural similarity index for paired-input models was 0.959 and 0.956 for the FRC and TLC output spaces, respectively, and for single-input models was 0.951 and 0.937. Global evaluation metrics demonstrated correlation between the registration-derived and predicted Jacobian means: the coefficient of determination (r²) for paired-input models was 0.974 and 0.938 for the FRC and TLC output spaces, respectively, and for single-input models was 0.598 and 0.346. After correcting for effort, registration-derived lobar volume change was strongly correlated with predicted lobar volume change: for paired-input models r² was 0.899 for both FRC and TLC output spaces, and for single-input models r² was 0.803 and 0.862, respectively.
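The similarity and r² values above can be computed, for example, as a 3-D structural similarity index between Jacobian images and the squared Pearson correlation of per-subject Jacobian means. The exact metric implementations are not specified in the abstract, so the choices below (skimage's structural_similarity and scipy's pearsonr) are assumptions used only for illustration.

# Sketch of two reported evaluation metrics (assumed definitions, not the
# authors' code): SSIM between predicted and registration-derived Jacobian
# images, and r² between per-subject Jacobian means.
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity

def jacobian_ssim(jac_pred, jac_reg):
    # 3-D SSIM between a predicted and a registration-derived Jacobian image.
    data_range = float(max(jac_pred.max(), jac_reg.max())
                       - min(jac_pred.min(), jac_reg.min()))
    return structural_similarity(jac_reg, jac_pred, data_range=data_range)

def global_r2(mean_jac_pred, mean_jac_reg):
    # Squared Pearson correlation between predicted and registration-derived
    # Jacobian means across subjects (one value per subject in each array).
    r, _ = pearsonr(np.asarray(mean_jac_reg), np.asarray(mean_jac_pred))
    return r ** 2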
Convolutional neural networks can be used to directly predict local tissue mechanics, eliminating the need for computationally expensive image registration. Networks that use paired CT images acquired at TLC and FRC allow for more accurate prediction of local tissue expansion compared to networks that use a single image. Networks that only require a single input image still show promising results, particularly after correcting for effort, and allow for local tissue expansion estimation in cases where multiple CT scans are not available. For single-input networks, the FRC image is more predictive of local tissue volume change compared to the TLC image.
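To make the single-input versus paired-input distinction concrete, the sketch below shows a generic 3-D residual regression CNN in which the number of input channels is 1 (a single FRC or TLC image) or 2 (the FRC and TLC pair). It is a hypothetical illustration only; the published Reg3DNet+ architecture is not described in this abstract.

# Hypothetical 3-D residual regression CNN (not the published Reg3DNet+):
# in_channels = 1 for the single-input models, 2 for the paired-input models.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection around two 3-D convolutions.
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class JacobianRegressor(nn.Module):
    def __init__(self, in_channels=2, width=32, blocks=4):
        super().__init__()
        self.stem = nn.Conv3d(in_channels, width, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock3D(width) for _ in range(blocks)])
        self.head = nn.Conv3d(width, 1, 1)  # one Jacobian value per voxel

    def forward(self, x):
        # x: (batch, in_channels, Z, Y, X) CT volume(s) or patch(es)
        return self.head(self.body(self.stem(x)))

# Paired-input configuration: FRC and TLC images stacked as two channels.
model = JacobianRegressor(in_channels=2)
jac_pred = model(torch.randn(1, 2, 64, 64, 64))  # -> (1, 1, 64, 64, 64)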