Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy.
Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy.
Med Phys. 2021 Nov;48(11):7112-7126. doi: 10.1002/mp.15282. Epub 2021 Oct 26.
Cone beam computed tomography (CBCT) is a standard solution for in-room image guidance in radiation therapy. It is used to evaluate and compensate for anatomopathological changes occurring between the day the dose delivery plan is defined and the day each fraction is delivered. CBCT is a fast and versatile solution, but it suffers from low contrast and requires proper calibration to derive density values. These limitations are even more prominent with customized in-room CBCT systems, yet strategies based on deep learning have shown potential for improving image quality. This article therefore presents a method based on a convolutional neural network and a novel two-step supervised training scheme, built on the transfer learning paradigm, for shading correction in narrow field-of-view (FOV) CBCT volumes acquired with an ad hoc in-room system.
We designed a U-Net convolutional neural network, trained on axial slices of corresponding CT/CBCT pairs. To improve the generalization capability of the network, we adopted a two-stage learning scheme using two distinct data sets. First, the network weights were trained on synthetic CBCT scans generated from a public data set; then only the deepest layers of the network were trained again on real-world clinical data to fine-tune the weights. The synthetic data were generated according to the real data acquisition parameters. The network takes a single grayscale volume as input and outputs the same volume with corrected shading and improved Hounsfield unit (HU) values.
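The following is a minimal sketch (in PyTorch) of the two-stage training scheme described above: a small U-Net-style network trained first on synthetic CT/CBCT pairs, then fine-tuned on clinical data with only the deepest (bottleneck) layers unfrozen. Layer sizes, loss function, optimizer settings, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Single-channel (grayscale) CBCT slice in, shading-corrected slice out."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)           # deepest layers
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

def train_stage(model, loader, epochs, lr):
    """One training stage: CBCT slice as input, corresponding CT slice as target."""
    opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for cbct, ct in loader:
            opt.zero_grad()
            loss = loss_fn(model(cbct), ct)
            loss.backward()
            opt.step()

model = SmallUNet()
# Stage 1: train all weights on synthetic CBCT/CT pairs (hypothetical loader).
# train_stage(model, synthetic_loader, epochs=50, lr=1e-3)

# Stage 2: freeze shallow layers, fine-tune only the deepest ones on clinical data.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("bottleneck")
# train_stage(model, clinical_loader, epochs=20, lr=1e-4)
```

Freezing all but the bottleneck parameters is one straightforward way to restrict fine-tuning to the deepest layers; the paper does not specify the exact layer split, so the `bottleneck` boundary here is an assumption.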
Evaluation was carried out with leave-one-out cross-validation, computed on 18 unique CT/CBCT pairs from six different patients in a real-world clinical data set. Comparing the original CBCT to CT and the improved CBCT to CT, we obtained an average improvement of 6 dB in peak signal-to-noise ratio (PSNR) and +2% in structural similarity index measure (SSIM). The median (interquartile range, IQR) HU difference between CBCT and CT improved from 161.37 (162.54) HU to 49.41 (66.70) HU. The region of interest (ROI)-based HU difference was narrowed by 75% in the spongy bone (femoral head), 89% in the bladder, 85% in fat, and 83% in muscle. The improvement in contrast-to-noise ratio (CNR) for these ROIs was about 67%.
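For reference, a minimal sketch of how the reported slice-wise metrics (PSNR, SSIM, ROI-based HU difference, CNR) could be computed with NumPy and scikit-image; the `data_range` value and the CNR formula used here are assumptions, not taken from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(ct, cbct, roi_mask, bg_mask, data_range=2000.0):
    """ct, cbct: 2D HU arrays; roi_mask, bg_mask: boolean masks of ROI and background."""
    psnr = peak_signal_noise_ratio(ct, cbct, data_range=data_range)
    ssim = structural_similarity(ct, cbct, data_range=data_range)
    hu_diff = np.median(np.abs(ct[roi_mask] - cbct[roi_mask]))   # ROI-based HU error
    cnr = abs(cbct[roi_mask].mean() - cbct[bg_mask].mean()) / cbct[bg_mask].std()
    return psnr, ssim, hu_diff, cnr
```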
We demonstrated that shading correction yielding CT-compatible data from narrow-FOV CBCT volumes acquired with a customized in-room system is feasible. Moreover, the transfer learning approach proved particularly beneficial for this shading correction task.