Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA.
Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA.
Med Phys. 2020 Apr;47(4):1702-1712. doi: 10.1002/mp.14055. Epub 2020 Feb 19.
Gadoxetic acid uptake rate (k1) obtained from dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) is a promising measure of regional liver function. Clinical exams are typically poorly characterized temporally, exhibiting low temporal resolution (LTR) compared with high-temporal-resolution (HTR) experimental acquisitions, and clinical demands further incentivize shortening these exams. This study develops a neural network-based approach to the quantification of k1 that is more robust than current models such as the linearized single-input, two-compartment (LSITC) model.
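For context, the LSITC approach estimates k1 voxel-wise by a linear fit. The abstract does not spell out the model, so the following LaTeX sketch shows one common Patlak-style linearization of a single-input, two-compartment uptake model; the symbols C_L (liver concentration), C_p (portal venous input), and v_dis (distribution volume), as well as the negligible-efflux assumption, are illustrative rather than taken from the abstract.

```latex
% Sketch only: one common linearization of a single-input two-compartment
% uptake model; symbol names and the negligible-efflux assumption are
% illustrative, not taken from the abstract.
\begin{align}
  C_L(t) &\approx k_1 \int_0^t C_p(\tau)\,d\tau + v_{\mathrm{dis}}\, C_p(t), \\
  \frac{C_L(t)}{C_p(t)} &\approx k_1\,\frac{\int_0^t C_p(\tau)\,d\tau}{C_p(t)} + v_{\mathrm{dis}},
\end{align}
% so a voxel-wise linear regression of C_L/C_p against (\int_0^t C_p)/C_p
% yields the uptake rate k_1 as the slope and v_dis as the intercept.
```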
Thirty liver HTR DCE MRI exams were acquired in 22 patients, each with at least 16 min of postcontrast data sampled at least every 13 s. A simple neural network (NN) with four hidden layers was trained on voxel-wise LTR data to predict k1. Low-temporal-resolution data were created by subsampling the HTR data to six time points, replicating the characteristics of clinical LTR exams; both the total length and the placement of the points in the training data were varied considerably to encourage robustness to variation. A generative adversarial network (GAN) was used to generate arterial and portal venous input functions for data augmentation based on the dual-input, two-compartment pharmacokinetic model of gadoxetic acid in the liver. The performance of the NN was compared with direct application of LSITC to both LTR and HTR data. Error was assessed for acquisition lengths subsampled from 16 down to 4 min, enabling assessment of robustness to acquisition length.
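The sketch below illustrates the kind of pipeline described: an HTR voxel curve subsampled to six irregularly placed time points within a randomized window length, fed to a small fully connected network with four hidden layers that regresses a single k1 value per voxel. The layer widths, time-point placement scheme, normalization, synthetic curve, and the use of PyTorch are all assumptions made for demonstration, not details from the abstract.

```python
# Illustrative sketch only: architecture widths, sampling scheme, and framework
# choice are assumptions; only "six time points" and "four hidden layers" come
# from the abstract.
import numpy as np
import torch
import torch.nn as nn

N_LTR_POINTS = 6  # clinical-like LTR exams emulated with six time points

def subsample_to_ltr(t_htr, c_htr, length_min, rng):
    """Draw six irregularly placed time points from an HTR voxel curve within
    a window of `length_min` minutes, randomizing placement for robustness."""
    t_max = length_min * 60.0
    candidates = np.sort(rng.uniform(0.0, t_max, size=N_LTR_POINTS))
    idx = np.clip(np.searchsorted(t_htr, candidates), 0, len(t_htr) - 1)
    return t_htr[idx], c_htr[idx]

class KUptakeNet(nn.Module):
    """Fully connected network with four hidden layers mapping a voxel's LTR
    samples (times + concentrations) to a single k1 estimate."""
    def __init__(self, n_in=2 * N_LTR_POINTS, width=64):
        super().__init__()
        layers, d = [], n_in
        for _ in range(4):                      # four hidden layers
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 1))          # k1 output
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Minimal usage with a synthetic HTR curve (~13 s sampling over 16 min).
rng = np.random.default_rng(0)
t_htr = np.arange(0.0, 16 * 60.0, 13.0)
c_htr = 1.0 - np.exp(-t_htr / 300.0) + 0.01 * rng.standard_normal(t_htr.size)
t_ltr, c_ltr = subsample_to_ltr(t_htr, c_htr, length_min=rng.uniform(4, 16), rng=rng)

model = KUptakeNet()
x = torch.tensor(np.concatenate([t_ltr / 960.0, c_ltr]), dtype=torch.float32)
k1_pred = model(x.unsqueeze(0))                 # one voxel -> one k1 estimate
```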
For 16-min acquisitions, the normalized root-mean-squared error (NRMSE) in k1 was 0.60, 1.77, and 1.21 for LSITC applied to HTR data, LSITC applied to LTR data, and the GAN-augmented NN applied to LTR data, respectively. As the acquisition length was shortened, errors for the LSITC approaches increased severalfold. For acquisitions shorter than 12 min, the GAN-augmented NN approach outperformed the LSITC approach to a statistically significant extent, even when LSITC was applied to HTR data.
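The abstract does not state how the NRMSE is normalized; the short sketch below assumes normalization by the mean of the reference (HTR-derived) k1 values, which is one common convention (normalization by the reference range is another).

```python
# Sketch of an NRMSE computation for comparing k1 maps; normalizing by the
# mean of the reference k1 values is an assumption, not stated in the abstract.
import numpy as np

def nrmse(k1_pred, k1_ref):
    k1_pred = np.asarray(k1_pred, dtype=float)
    k1_ref = np.asarray(k1_ref, dtype=float)
    rmse = np.sqrt(np.mean((k1_pred - k1_ref) ** 2))
    return rmse / np.mean(k1_ref)

# e.g. nrmse(k1_from_ltr_nn, k1_from_htr_reference)
```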
The study indicates that acquisition length strongly affects LSITC analysis of DCE data at standard temporal sampling, and that machine learning methods, such as the implemented NN, can be far more resilient to shortened acquisition times than direct fitting of the LSITC model.