State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China.
Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China.
Sensors (Basel). 2024 Aug 22;24(16):5447. doi: 10.3390/s24165447.
Accurate and precise rigid registration between head-neck computed tomography (CT) and cone-beam computed tomography (CBCT) images is crucial for correcting setup errors in image-guided radiotherapy (IGRT) for head and neck tumors. However, conventional registration methods that treat the head and neck as a single entity may not achieve the necessary accuracy for the head region, which is particularly sensitive to radiation in radiotherapy. We propose ACSwinNet, a deep learning-based method for head-neck CT-CBCT rigid registration that aims to enhance registration precision in the head region. Our approach integrates an anatomical constraint encoder with anatomical segmentations of tissues and organs to improve the accuracy of rigid registration in the head region. We also employ a Swin Transformer-based network for registration in cases with large initial misalignment, and a perceptual similarity metric network to address intensity discrepancies and artifacts between the CT and CBCT images. We validate the proposed method on a head-neck CT-CBCT dataset acquired from clinical patients. Compared with the conventional rigid method, our method exhibits a lower target registration error (TRE) for landmarks in the head region (reduced from 2.14 ± 0.45 mm to 1.82 ± 0.39 mm), a higher Dice similarity coefficient (DSC) (increased from 0.743 ± 0.051 to 0.755 ± 0.053), and a higher structural similarity index (increased from 0.854 ± 0.044 to 0.870 ± 0.043). Our proposed method effectively addresses the low registration accuracy in the head region that has limited conventional methods, demonstrating significant potential to improve the accuracy of IGRT for head and neck tumors.
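To make the reported evaluation concrete, the sketch below shows how the abstract's metrics could be computed for a predicted rigid (6-DOF) transform: target registration error (TRE) over paired landmarks and the Dice similarity coefficient (DSC) over segmentation masks. This is a minimal illustration, not the authors' implementation; the Euler-angle convention and the helper names (rigid_matrix, target_registration_error, dice) are assumptions for the example.

```python
import numpy as np

def rigid_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid transform from Euler angles (rad) and translations (mm).
    The Z*Y*X rotation order is an assumption; the paper does not specify one."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def target_registration_error(fixed_pts, moving_pts, T):
    """Mean Euclidean distance (mm) between fixed landmarks and rigidly transformed moving landmarks."""
    moving_h = np.c_[moving_pts, np.ones(len(moving_pts))]  # homogeneous coordinates
    warped = (T @ moving_h.T).T[:, :3]
    return np.linalg.norm(warped - fixed_pts, axis=1).mean()

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Toy usage with synthetic landmarks (mm) and masks; clinical data would come from the CT/CBCT pair.
rng = np.random.default_rng(0)
fixed = rng.uniform(0, 100, size=(5, 3))
T_pred = rigid_matrix(0.01, -0.005, 0.02, 1.2, -0.8, 0.5)
moving = fixed + rng.normal(0, 1.0, size=(5, 3))  # slightly perturbed counterparts
print("TRE (mm):", target_registration_error(fixed, moving, T_pred))
print("DSC:", dice(rng.random((32, 32, 32)) > 0.5, rng.random((32, 32, 32)) > 0.5))
```

In practice the predicted transform would come from the registration network and be applied to the CBCT landmarks and anatomical segmentations before computing these metrics.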