IEEE Trans Med Imaging. 2023 Mar;42(3):823-833. doi: 10.1109/TMI.2022.3218147. Epub 2023 Mar 2.
We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction, and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely sampled transrectal ultrasound (TRUS) images. Our approach obtains registration error (4.26 mm) comparable to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data and adapting in real time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
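The core idea of the meta-learning protocol, learning an initialization that adapts rapidly from sparse data at inference time, can be illustrated with a minimal first-order MAML sketch. This is a toy scalar-regression stand-in, not the paper's registration network: the task distribution, model, learning rates, and function names below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """MSE of the linear model y_hat = w * x, and its gradient in w."""
    r = w * x - y
    return float(np.mean(r ** 2)), float(np.mean(2 * r * x))

def sample_task():
    """A toy 'task': an unknown slope a of y = a * x (illustrative only)."""
    return rng.uniform(0.5, 2.0)

def sample_data(a, n=8):
    """A few observations from task a (stand-in for sparsely sampled data)."""
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, a * x

# First-order MAML: meta-learn an initialization w0 such that a single
# inner gradient step on a new task's sparse support set already fits it.
w0, inner_lr, meta_lr = 0.0, 0.5, 0.05
for _ in range(2000):
    a = sample_task()
    xs, ys = sample_data(a)              # support set (inner adaptation)
    _, g = loss_and_grad(w0, xs, ys)
    w_adapted = w0 - inner_lr * g        # one rapid adaptation step
    xq, yq = sample_data(a)              # query set (meta-objective)
    _, gq = loss_and_grad(w_adapted, xq, yq)
    w0 -= meta_lr * gq                   # first-order outer update on w0
```

After meta-training, a single inner-loop step on a handful of points from an unseen task moves the model close to that task's solution, mirroring how the proposed framework adapts in real time as sparse TRUS data arrive.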