The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, People's Republic of China.
Center of Medical Physics, Nanjing Medical University, Changzhou 213003, People's Republic of China.
Phys Med Biol. 2021 Aug 23;66(17). doi: 10.1088/1361-6560/ac195e.
A long-standing problem in image-guided radiotherapy is that low-quality intraoperative images hinder automatic registration algorithms. In particular, for digital radiography (DR) and digitally reconstructed radiographs (DRR), the blurred, low-contrast, and noisy DR makes multimodal DR-DRR registration challenging. Therefore, we propose a novel CNN-based method called CrossModalNet that exploits the high-quality preoperative modality (DRR) to compensate for the limitations of the intraoperative images (DR), thereby improving registration accuracy. The method consists of two parts: DR-DRR contour prediction and contour-based rigid registration. We designed the CrossModal Attention Module and the CrossModal Refine Module to fully exploit multiscale crossmodal features and to implement crossmodal interactions during the feature encoding and decoding stages. The predicted anatomical contours of DR and DRR are then registered by the classic mutual information method. We collected 2486 patient scans to train CrossModalNet and 170 scans to test its performance. The results show that it outperforms classic and state-of-the-art methods, with a 95th percentile Hausdorff distance of 5.82 pixels and a registration accuracy of 81.2%. The code is available at https://github.com/lc82111/crossModalNet.
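The second stage, contour-based rigid registration by mutual information, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the predicted contours are binary masks and, for simplicity, performs an exhaustive search over integer translations and a small set of rotation angles, scoring each candidate pose by the mutual information of the joint intensity histogram:

```python
import numpy as np
from scipy import ndimage


def mutual_information(a, b, bins=2):
    """Mutual information between two images via their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint probability
    px = pxy.sum(axis=1)                    # marginal of a
    py = pxy.sum(axis=0)                    # marginal of b
    nz = pxy > 0                            # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))


def register_rigid(fixed, moving, shifts=range(-5, 6), angles=(0.0,)):
    """Exhaustive rigid search: return (angle, dy, dx) maximizing MI."""
    best_mi, best_pose = -np.inf, None
    for ang in angles:
        rotated = ndimage.rotate(moving, ang, reshape=False, order=0)
        for dy in shifts:
            for dx in shifts:
                candidate = ndimage.shift(rotated, (dy, dx), order=0)
                mi = mutual_information(fixed, candidate)
                if mi > best_mi:
                    best_mi, best_pose = mi, (ang, dy, dx)
    return best_pose


# Toy usage: recover a known shift between two contour masks.
fixed = np.zeros((32, 32))
fixed[8:24, 8:24] = 1.0
fixed[10:22, 10:22] = 0.0                   # square ring "contour"
moving = ndimage.shift(fixed, (2, -3), order=0)
print(register_rigid(fixed, moving))        # expect (0.0, -2, 3)
```

A real DR-DRR pipeline would replace the exhaustive search with a continuous optimizer (e.g. as provided by SimpleITK or elastix) and interpolate sub-pixel transforms, but the MI objective is the same.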