Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies at Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
Shenendehowa High School, Clifton Park, NY 12065, USA.
Comput Med Imaging Graph. 2020 Sep;84:101769. doi: 10.1016/j.compmedimag.2020.101769. Epub 2020 Jul 31.
Artificial intelligence, especially the deep learning paradigm, has had a considerable impact on cancer imaging and interpretation. For instance, fusing transrectal ultrasound (TRUS) and magnetic resonance (MR) images to guide prostate cancer biopsy can significantly improve diagnostic accuracy. However, multi-modal image registration remains challenging, even with the latest deep learning technology, because it requires large amounts of labeled transformations for network training. This paper addresses this problem from two angles: (i) a new method of generating large numbers of transformations that follow a targeted distribution, to improve network training, and (ii) a coarse-to-fine multi-stage method that gradually maps the distribution from source to target. We evaluate both innovations on a multi-modal prostate image registration task in which a T2-weighted MR volume and a reconstructed 3D ultrasound volume are to be aligned. Our results demonstrate that the use of data generation can reduce the registration error by up to 62%. Moreover, the multi-stage coarse-to-fine registration technique achieves a mean surface registration error (SRE) of 3.66 mm (from an initial mean SRE of 9.42 mm), which is significantly better than one-step registration with a mean SRE of 4.08 mm.
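The abstract's first innovation, generating labeled transformations from a targeted distribution for network training, can be illustrated with a minimal sketch. The paper does not specify its parameterization or distribution; the function below is a hypothetical example that samples 3D rigid transforms (Euler-angle rotations and translations drawn from zero-mean Gaussians, with assumed spreads) as 4x4 homogeneous matrices, which could then be applied to a fixed volume to synthesize misaligned training pairs with known ground-truth labels.

```python
import numpy as np

def sample_rigid_transforms(n, rot_std_deg=5.0, trans_std_mm=3.0, seed=0):
    """Sample n random 3D rigid transforms as 4x4 homogeneous matrices.

    Rotation angles (about x, y, z) and translations are drawn from
    zero-mean Gaussians; the standard deviations here are illustrative
    assumptions, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    transforms = []
    for _ in range(n):
        ax, ay, az = np.deg2rad(rng.normal(0.0, rot_std_deg, size=3))
        t = rng.normal(0.0, trans_std_mm, size=3)
        # Elementary rotations about the x, y, and z axes
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(ax), -np.sin(ax)],
                       [0, np.sin(ax),  np.cos(ax)]])
        Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                       [0, 1, 0],
                       [-np.sin(ay), 0, np.cos(ay)]])
        Rz = np.array([[np.cos(az), -np.sin(az), 0],
                       [np.sin(az),  np.cos(az), 0],
                       [0, 0, 1]])
        # Compose into a single homogeneous transform
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = t
        transforms.append(T)
    return np.stack(transforms)
```

Each sampled matrix serves as a training label: the network sees the transformed volume paired with the original and is trained to regress the known transform. Shaping the sampling distribution to match the misalignments expected at test time is what the abstract refers to as a "targeted distribution".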