Rusanov Branimir, Ebert Martin A, Mukwada Godfrey, Hassan Ghulam Mubashar, Sabet Mahsheed
School of Physics, Mathematics and Computing, The University of Western Australia, Australia.
Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Australia.
Phys Med Biol. 2021 Oct 22;66(21). doi: 10.1088/1361-6560/ac27b6.
Extending cone-beam CT (CBCT) use toward dose accumulation and adaptive radiotherapy (ART) requires more accurate HU reproduction, since cone-beam geometries are heavily degraded by photon scatter. This study proposes a novel method that demonstrates how deep learning based on phantom data can be used effectively for CBCT intensity correction in patient images. Four anthropomorphic phantoms were scanned on CBCT and conventional fan-beam CT systems. Intensity correction was performed by estimating the cone-beam intensity deviations from prior information contained in the CT. Residual projections were extracted by subtracting raw cone-beam projections from virtual CT projections. An improved version of U-net was trained on a total of 2001 projection pairs. Once trained, the network could estimate intensity deviations from input raw projections of patient head and neck scans. Corrected CBCT images improved the contrast-to-noise ratio (CNR) over uncorrected reconstructions by a factor of 2.08. The mean absolute error improved from 318 HU to 74 HU, and the structural similarity index from 0.750 to 0.812. Visual assessment based on line-profile measurements and difference-image analysis indicated that the proposed method reduced noise and beam-hardening artefacts compared with uncorrected and manufacturer reconstructions. Projection-domain intensity correction for cone-beam acquisitions of patients was shown to be feasible using a convolutional neural network trained on phantom data. The method shows promise for further improvement, which may eventually facilitate dose monitoring and ART in the clinical radiotherapy workflow.
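The residual-projection idea described above can be sketched in a few lines: the training target is the difference between the virtual CT projection and the raw CBCT projection, and the correction adds the network's estimated residual back onto the raw projection. This is a minimal illustrative sketch with synthetic NumPy arrays, not the authors' actual pipeline; all function names and the scatter model are assumptions for illustration.

```python
import numpy as np

def residual_projection(cbct_proj, virtual_ct_proj):
    """Training target (illustrative): deviation of the raw CBCT
    projection from the scatter-free virtual CT projection."""
    return virtual_ct_proj - cbct_proj

def apply_correction(cbct_proj, predicted_residual):
    """Corrected projection = raw CBCT projection + estimated residual."""
    return cbct_proj + predicted_residual

def mean_absolute_error(img, ref):
    """MAE between a reconstruction and the reference (e.g. in HU)."""
    return float(np.mean(np.abs(img - ref)))

# Toy demonstration: a uniform "scatter" offset degrades the projection.
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 1.0, (4, 4))            # stand-in virtual CT projection
scatter = 0.3 * np.ones((4, 4))              # stand-in scatter contamination
cbct = ct - scatter                          # degraded CBCT projection

target = residual_projection(cbct, ct)       # what the network learns to predict
corrected = apply_correction(cbct, target)   # a perfect prediction recovers the CT
print(np.allclose(corrected, ct))            # True
print(mean_absolute_error(cbct, ct) > mean_absolute_error(corrected, ct))  # True
```

In practice the residual would be predicted by the trained U-net from the raw projection alone; here the exact target stands in for the prediction to show that the correction is consistent by construction.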