Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China.
Luoyang Institute of Science and Technology, Luoyang, China.
Med Phys. 2021 Nov;48(11):6901-6915. doi: 10.1002/mp.15214. Epub 2021 Sep 20.
This study aimed to design and evaluate a novel method for the registration of 2D lateral cephalograms and 3D craniofacial cone-beam computed tomography (CBCT) images, providing patient-specific 3D structures from a 2D lateral cephalogram without additional radiation exposure.
We developed a cross-modal deformable registration model based on a deep convolutional neural network. Our approach took advantage of a low-dimensional deformation-field encoding and an iterative feedback scheme to infer coarse-to-fine volumetric deformations. In particular, we constructed a statistical subspace of deformation fields and parameterized the nonlinear mapping from an image pair, consisting of the target 2D lateral cephalogram and the reference volumetric CBCT, to a latent encoding of the deformation field. Rather than performing one-shot registration with the learned mapping function, we introduced a feedback scheme to progressively update the reference volumetric image and to infer coarse-to-fine deformation fields, accounting for the shape variations of anatomical structures. A total of 220 clinically obtained CBCTs were used to train and validate the proposed model, among which 120 CBCTs were used to generate a training dataset of 24k paired synthetic lateral cephalograms and CBCTs. The proposed approach was evaluated on the deformable 2D-3D registration of clinically obtained lateral cephalograms and CBCTs from growing and adult orthodontic patients.
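The two core ideas above, a statistical subspace that encodes full deformation fields in a low-dimensional latent code, and a feedback loop that repeatedly updates the reference volume, can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the PCA construction of the subspace, the stand-in `predict_latent` and `warp` functions, and all array dimensions are assumptions; in the paper the latent code is predicted by a deep CNN from the real image pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training" deformation fields: N samples of a flattened 3D
# displacement field (D*H*W*3 components). In the paper these would
# come from CBCT-to-CBCT registrations; here they are random.
N, FIELD_DIM = 50, 4 * 4 * 4 * 3
fields = rng.normal(size=(N, FIELD_DIM))

# Statistical subspace of deformation fields via PCA: a latent code z
# of dimension K parameterizes a full field as mean + z @ basis.
K = 8
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
basis = Vt[:K]                      # (K, FIELD_DIM) principal modes

def decode(z):
    """Reconstruct a full deformation field from its latent code."""
    return mean + z @ basis

def predict_latent(target_2d, reference_3d):
    """Stand-in for the CNN mapping an image pair (2D cephalogram,
    reference CBCT) to a latent deformation code. Hypothetical: a
    fixed random linear projection replaces the learned network."""
    feat = np.concatenate([target_2d.ravel(), reference_3d.ravel()])
    W = rng.normal(size=(K, feat.size)) * 0.01
    return W @ feat

def warp(volume, field):
    """Stand-in for resampling the volume under the deformation."""
    return volume + field[: volume.size].reshape(volume.shape) * 0.1

# Iterative feedback: each pass re-predicts a residual deformation
# against the *updated* reference, so later passes capture finer
# structural detail than one-shot registration would.
target = rng.normal(size=(8, 8))        # toy 2D lateral cephalogram
reference = rng.normal(size=(4, 4, 4))  # toy reference CBCT volume
total_field = np.zeros(FIELD_DIM)
for step in range(3):
    z = predict_latent(target, reference)
    residual = decode(z)
    total_field += residual
    reference = warp(reference, residual)  # feedback: update reference
```

The low-dimensional encoding keeps the regression target small and constrains predicted deformations to anatomically plausible combinations of the subspace modes, which is why the feedback refinement stays stable across iterations.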
Strong structural consistency was observed between the deformed CBCT and the target lateral cephalogram under all criteria. The proposed method achieved state-of-the-art performance, with mean contour deviations of 0.41 ± 0.12 mm on the anterior cranial base, 0.48 ± 0.17 mm on the mandible, and 0.35 ± 0.08 mm on the maxilla, respectively. The mean surface mesh errors ranged from 0.78 to 0.97 mm across the various craniofacial structures, and the landmark registration errors (LREs) ranged from 0.83 to 1.24 mm on the growing datasets with respect to 14 landmarks. The proposed iterative feedback scheme preserved structural details and improved the registration. The resultant deformed volumetric image was consistent with the target lateral cephalogram in both the 2D projective plane and 3D volumetric space with respect to the multicategory craniofacial structures.
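The reported metrics can be illustrated with simple point-set formulations: contour deviation as the mean nearest-neighbor distance from deformed contour points to the reference contour, and LRE as the mean Euclidean distance between corresponding landmarks. This is a generic sketch on toy 2D points; the study's exact measurement protocol (contour sampling, correspondence definition, 3D surfaces) may differ.

```python
import numpy as np

def mean_contour_deviation(pred_pts, ref_pts):
    """Mean nearest-neighbor distance from each predicted contour
    point to the reference contour (a common surrogate for contour
    deviation between unmatched point sets)."""
    d = np.linalg.norm(pred_pts[:, None, :] - ref_pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def landmark_registration_error(pred_lm, ref_lm):
    """Mean Euclidean distance between corresponding landmarks."""
    return np.linalg.norm(pred_lm - ref_lm, axis=-1).mean()

# Toy example: every reference point shifted by (0.3, 0.4) mm,
# i.e., a uniform 0.5 mm displacement.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = ref + np.array([0.3, 0.4])
print(mean_contour_deviation(pred, ref))      # 0.5
print(landmark_registration_error(pred, ref)) # 0.5
```

With a uniform 0.5 mm shift both metrics agree; on real anatomy the nearest-neighbor contour measure is typically smaller than the corresponding-landmark error, since it does not require correct point correspondence.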
The results suggest that the deep learning-based 2D-3D registration model enables the deformable alignment of 2D lateral cephalograms and CBCTs and estimates patient-specific 3D craniofacial structures.