Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China.
Med Phys. 2023 Oct;50(10):6243-6258. doi: 10.1002/mp.16396. Epub 2023 Apr 6.
The fusion of computed tomography (CT) and ultrasound (US) images can enhance lesion detection and improve the success rate of liver interventional radiology. Image-based fusion methods face the challenge of registration initialization due to the random scanning pose and limited field of view of US. Existing automatic methods that use vessel geometric information and intensity-based metrics are sensitive to parameters and have low success rates. Learning-based methods require a large number of registered datasets for training.
The aim of this study is to provide a fully automatic and robust US-3D CT registration method, assisted by deep learning-based segmentation, that requires neither registered training data nor user-specified parameters and can further be used to prepare training samples for the study of learning-based methods.
We propose a fully automatic CT-3D US registration method based on two improved registration metrics. We use 3D U-Net-based multi-organ segmentation of US and CT to assist the conventional registration. The rigid transform is searched in the space of all paired vessel bifurcation planes, and the best transform is selected by a segmentation overlap metric that is more closely related to segmentation precision than the Dice coefficient. In the nonrigid registration phase, we propose a hybrid context- and edge-based image similarity metric, combined with a simple mask that removes most noisy US voxels, to guide the B-spline transform registration. We evaluate our method on 42 paired CT-3D US datasets scanned with two different US devices at two hospitals. We compared our method with other existing methods using quantitative measures of target registration error (TRE) and the Jacobian determinant with paired t-tests, as well as qualitative registration imaging results.
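To make the rigid search step concrete, the following is a minimal sketch (not the authors' code) of the idea described above: candidate rigid transforms derived from paired vessel-bifurcation planes are each scored by a precision-like segmentation overlap between the transformed CT labels and the fixed 3D US labels, and the best-scoring transform is kept. The helper names, label conventions, and the assumption that a list of candidate transforms is already available are illustrative only.

```python
# Sketch of segmentation-overlap-guided rigid search (assumptions noted above).
import numpy as np
import SimpleITK as sitk

def overlap_precision(us_labels: np.ndarray, warped_ct_labels: np.ndarray, label: int) -> float:
    """Fraction of warped CT voxels of `label` that land on the same US label.

    Unlike the Dice coefficient, this score is normalized only by the warped CT
    segmentation, so it behaves like a precision measure.
    """
    warped = warped_ct_labels == label
    if warped.sum() == 0:
        return 0.0
    return float(np.logical_and(us_labels == label, warped).sum() / warped.sum())

def score_candidate(us_img, ct_labels_img, us_labels, transform, labels=(1, 2, 3)):
    """Warp the CT label map with a candidate rigid transform (nearest-neighbor
    resampling onto the US grid) and average per-label overlap precision."""
    warped = sitk.Resample(ct_labels_img, us_img, transform, sitk.sitkNearestNeighbor, 0)
    warped_np = sitk.GetArrayFromImage(warped)
    return float(np.mean([overlap_precision(us_labels, warped_np, l) for l in labels]))

def search_rigid(us_img, ct_labels_img, us_labels, candidate_transforms):
    """Evaluate all candidate transforms and return the best-scoring one."""
    return max(candidate_transforms,
               key=lambda t: score_candidate(us_img, ct_labels_img, us_labels, t))
```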
The results show that our method achieves a fully automatic rigid registration TRE of 4.895 mm and a deformable registration TRE of 2.995 mm on average, which outperforms state-of-the-art automatic linear methods and nonlinear registration metrics with paired t-test p values less than 0.05. The proposed overlap metric achieves better results than self-similarity description (SSD), edge matching (EM), and block matching (BM), with p values of 1.624E-10, 4.235E-9, and 0.002, respectively. The proposed hybrid edge- and context-based metric outperforms context-only, edge-only, and intensity-statistics-only metrics, with p values of 0.023, 3.81E-5, and 1.38E-15, respectively. The 3D US segmentation achieves mean Dice similarity coefficients (DSC) of 0.799, 0.724, and 0.788 and precisions of 0.871, 0.769, and 0.862 for the gallbladder, vessel, and branch vessel, respectively.
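As a minimal sketch of how the quantitative comparison can be set up, TRE is the Euclidean distance between corresponding landmarks after registration, and two methods are compared case by case with a paired t-test. The arrays below are placeholders only, not the study's measurements.

```python
# Sketch of TRE computation and paired t-test comparison (placeholder data).
import numpy as np
from scipy.stats import ttest_rel

def tre(moved_landmarks: np.ndarray, fixed_landmarks: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between N x 3 landmark arrays."""
    return float(np.linalg.norm(moved_landmarks - fixed_landmarks, axis=1).mean())

# One TRE value per dataset for each method (placeholder values only).
tre_proposed = np.random.default_rng(0).uniform(2.0, 4.0, size=42)
tre_baseline = np.random.default_rng(1).uniform(3.0, 6.0, size=42)
t_stat, p_value = ttest_rel(tre_proposed, tre_baseline)  # paired t-test across cases
```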
The deep learning-based US segmentation achieves satisfactory results that assist robust conventional rigid registration. The Dice similarity coefficient-based metric and the hybrid context- and edge-based image similarity metric contribute to robust and accurate registration.