Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:2843-2846. doi: 10.1109/EMBC46164.2021.9629952.
Artifacts and defects in Cone-beam Computed Tomography (CBCT) images are a problem in radiotherapy and surgical procedures. Unsupervised learning-based image translation techniques have been studied to improve the image quality of head and neck CBCT images, but few studies have addressed abdominal CBCT images, which are strongly affected by organ deformation due to posture and breathing. In this study, we propose a method for improving the image quality of abdominal CBCT images by translating their voxel values to those of corresponding paired CT images using an unsupervised CycleGAN framework. The method preserves anatomical structure through adversarial learning that translates voxel values according to corresponding regions between CBCT and CT images of the same case. The image translation model was trained on 68 CT-CBCT datasets and then applied to 8 test datasets, confirming the effectiveness of the proposed method for improving the image quality of CBCT images.
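The abstract does not give implementation details, but the CycleGAN objective it refers to combines an adversarial loss (translated CBCT should look like CT) with a cycle-consistency loss (translating CBCT to CT and back should recover the input), and the latter is what discourages the generator from altering anatomy. A minimal sketch of that objective, using toy linear maps in place of the convolutional generators and discriminators an actual implementation would use (all names, the LSGAN loss form, and the weighting `lambda_cyc = 10.0` are assumptions, not details from the paper):

```python
import numpy as np

# Sketch of the CycleGAN objective for CBCT -> CT translation.
# G: CBCT -> CT, F: CT -> CBCT, D_ct: discriminator for "looks like CT".
# The toy maps below stand in for real convolutional networks.

rng = np.random.default_rng(0)

def G(x):   # toy generator CBCT -> CT: an affine shift of voxel values
    return x * 1.1 + 0.05

def F(y):   # toy generator CT -> CBCT: the exact inverse of G here
    return (y - 0.05) / 1.1

def D_ct(y):    # toy discriminator: a score in (0, 1) for "looks like CT"
    return 1.0 / (1.0 + np.exp(-y.mean()))

def lsgan_loss(score, target):
    # least-squares GAN loss (one common choice; an assumption here)
    return (score - target) ** 2

def cycle_loss(x, x_rec):
    # L1 cycle-consistency: F(G(x)) should recover x -- this term is
    # what penalizes translations that distort anatomical structure
    return np.abs(x - x_rec).mean()

cbct = rng.random((4, 4))          # toy CBCT "image"
fake_ct = G(cbct)                  # translated image
cbct_rec = F(fake_ct)              # reconstruction via the reverse generator

adv = lsgan_loss(D_ct(fake_ct), 1.0)   # generator wants D_ct to output 1
cyc = cycle_loss(cbct, cbct_rec)
lambda_cyc = 10.0                      # assumed cycle-loss weight
total = adv + lambda_cyc * cyc
print(float(cyc))                  # near 0: F inverts G exactly in this toy
```

In a real model G and F cannot invert each other exactly, so the cycle term stays nonzero and acts as a soft constraint; training alternates between minimizing `total` for the generators and the corresponding discriminator losses for `D_ct` and its CBCT-side counterpart.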