Baroudi Hana, Chen Xinru, Cao Wenhua, El Basha Mohammad D, Gay Skylar, Gronberg Mary Peters, Hernandez Soleil, Huang Kai, Kaffey Zaphanlene, Melancon Adam D, Mumme Raymond P, Sjogreen Carlos, Tsai January Y, Yu Cenji, Court Laurence E, Pino Ramiro, Zhao Yao
MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA.
Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA.
J Imaging. 2023 Nov 8;9(11):245. doi: 10.3390/jimaging9110245.
In this study, we aimed to enhance the contouring accuracy of cardiac pacemakers by improving their visualization using deep learning models to predict MV CBCT images based on kV CT or CBCT images. Ten pacemakers and four thorax phantoms were included, creating a total of 35 combinations. Each combination was imaged on a Varian Halcyon (kV/MV CBCT images) and a Siemens SOMATOM CT scanner (kV CT images). Two generative adversarial network (GAN)-based models, cycleGAN and conditional GAN (cGAN), were trained to generate synthetic MV (sMV) CBCT images from kV CT/CBCT images using twenty-eight datasets (80%). The pacemakers in the sMV CBCT images and original MV CBCT images were manually delineated and reviewed by three users. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to compare contour accuracy. Visual inspection showed improved visualization of pacemakers on sMV CBCT images compared to original kV CT/CBCT images. Moreover, cGAN demonstrated superior performance in enhancing pacemaker visualization compared to cycleGAN. The mean DSC, HD95, and MSD for contours on sMV CBCT images generated from kV CT/CBCT images were 0.91 ± 0.02/0.92 ± 0.01, 1.38 ± 0.31 mm/1.18 ± 0.20 mm, and 0.42 ± 0.07 mm/0.36 ± 0.06 mm, respectively, using the cGAN model. Deep learning-based methods, specifically cycleGAN and cGAN, can effectively enhance the visualization of pacemakers in thorax kV CT/CBCT images, thereby improving the contouring precision of these devices.
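For context, the contour-accuracy metrics reported above (DSC, HD95, and MSD) are standard measures computed from pairs of binary segmentation masks. The sketch below is a minimal, hypothetical NumPy/SciPy implementation of these metrics; the function names (dice, surface_distances, hd95_and_msd) are illustrative and are not taken from the study, whose own implementation is not described in the abstract.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface_distances(a: np.ndarray, b: np.ndarray, spacing) -> np.ndarray:
    """Distances (in mm) from each surface voxel of mask a to the nearest surface voxel of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)  # boundary voxels of a
    surf_b = b ^ binary_erosion(b)  # boundary voxels of b
    # Euclidean distance map to b's surface, scaled by the voxel spacing
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95_and_msd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95% Hausdorff distance (HD95) and mean surface distance (MSD)."""
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    all_d = np.concatenate([d_ab, d_ba])
    return np.percentile(all_d, 95), all_d.mean()
```

In this hypothetical usage, a pacemaker contour delineated on an sMV CBCT image would be compared against the reference contour on the original MV CBCT image by passing the two binary masks and the image's voxel spacing in mm to dice and hd95_and_msd.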