Hu Rui, Yan Hui, Nian Fudong, Mao Ronghu, Li Teng
Key Laboratory of Intelligent Computing and Signal Processing, Ministry of Education/School of Artificial Intelligence, Anhui University, Hefei, China.
Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
Quant Imaging Med Surg. 2022 Jul;12(7):3705-3716. doi: 10.21037/qims-21-1194.
The registration of computed tomography (CT) and cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT). However, the large intensity variation between CT and CBCT images limits the registration performance and its clinical application in IGRT. In this study, a learning-based unsupervised approach was developed to address this issue and accurately register CT and CBCT images by predicting the deformation field.
A dual attention module was used to handle the large intensity variation between CT and CBCT images. Specifically, a scale-aware position attention block (SP-BLOCK) and a scale-aware channel attention block (SC-BLOCK) were employed to integrate contextual information across the spatial and channel dimensions. The SP-BLOCK enhances the correlation of similar features by weighting and aggregating multi-scale features at different positions, while the SC-BLOCK aggregates the features of all channels to selectively emphasize dependencies between channel maps.
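The core idea of the two attention blocks can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it is a simplified, single-scale, 2D numpy illustration (the paper's blocks are scale-aware and operate on 3D volumes, and a real model would add learned projection weights). It shows the essential distinction: position attention computes an N×N affinity map over spatial locations, while channel attention computes a C×C affinity map over feature channels.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat):
    """Spatial self-attention (single-scale sketch of an SP-style block):
    each position is re-weighted by its similarity to all other positions,
    so similar features at distant locations reinforce each other."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)            # C x N, N = H*W positions
    energy = f.T @ f                      # N x N position affinities
    attn = softmax(energy, axis=-1)       # each row sums to 1
    out = f @ attn.T                      # aggregate over similar positions
    return feat + out.reshape(C, H, W)    # residual connection

def channel_attention(feat):
    """Channel self-attention (sketch of an SC-style block): inter-channel
    dependencies are computed and used to selectively emphasize channel maps."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)            # C x N
    energy = f @ f.T                      # C x C channel affinities
    attn = softmax(energy, axis=-1)
    out = attn @ f                        # re-weight channel maps
    return feat + out.reshape(C, H, W)

x = np.random.rand(8, 16, 16).astype(np.float32)  # toy feature map: 8 channels
y = position_attention(x)
z = channel_attention(x)
assert y.shape == x.shape and z.shape == x.shape
```

In a full network, both outputs would typically be fused (e.g., summed) before being passed to the decoder that predicts the deformation field.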
The proposed method was compared with existing mainstream methods on the 4D-LUNG dataset. It achieved the highest structural similarity (SSIM) and Dice similarity coefficient (DICE) scores, 86.34% and 89.74% respectively, and the lowest target registration error (TRE) of 2.07 mm.
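For reference, two of the evaluation metrics above have simple closed forms. The sketch below gives standard textbook definitions of DICE (overlap between two binary masks) and mean TRE (average Euclidean distance between corresponding landmarks after registration); the exact landmark sets and mask definitions used in the paper's evaluation are not reproduced here.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def tre(pts_warped, pts_target):
    """Mean target registration error: average Euclidean distance
    between corresponding landmark pairs (in mm if inputs are in mm)."""
    return float(np.linalg.norm(pts_warped - pts_target, axis=1).mean())

# Toy example: two overlapping binary masks.
a = np.zeros((4, 4), dtype=bool); a[:2] = True   # |A| = 8
b = np.zeros((4, 4), dtype=bool); b[:3] = True   # |B| = 12, |A ∩ B| = 8
print(round(dice(a, b), 3))   # 2*8 / (8+12) = 0.8

# Toy example: two 3D landmarks, distances 0 mm and 5 mm.
p = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
q = np.zeros_like(p)
print(tre(p, q))              # (0 + 5) / 2 = 2.5
```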
The proposed method can register CT and CBCT images with high accuracy without the need for manual labeling. It provides an effective way to achieve high-accuracy patient positioning and target localization in IGRT.