Meyer C R, Boes J L, Kim B, Bland P H, Zasadny K R, Kison P V, Koral K, Frey K A, Wahl R L
Department of Radiology, University of Michigan Medical School, Ann Arbor 48109, USA.
Med Image Anal. 1997 Apr;1(3):195-206. doi: 10.1016/s1361-8415(97)85010-4.
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing and minimal user input, and easily implements either affine (i.e. linear) or thin-plate spline (TPS) warped registrations. We evaluated the algorithm in phantom studies, as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT, and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
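The abstract's core procedure — iteratively adjusting registration parameters to maximize the mutual information between two gray-scale data sets — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: it uses 2-D synthetic images, a fixed-bin joint histogram, and an exhaustive rotate-free translation search (the paper registers 3-D multimodal volumes with affine and TPS transforms), and the function names `mutual_information` and `register_translate` are hypothetical.

```python
import numpy as np


def mutual_information(a, b, bins=32):
    """Mutual information between two gray-scale images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)           # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)           # marginal p(y)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))


def register_translate(fixed, moving, max_shift=5):
    """Exhaustive rotate-free translation search maximizing mutual information."""
    best_shift, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            mi = mutual_information(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
            if mi > best_mi:
                best_shift, best_mi = (dy, dx), mi
    return best_shift, best_mi


# Toy demonstration: recover a known (wraparound) translation.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
moving = np.roll(img, (3, 2), axis=(0, 1))        # misaligned copy
best_shift, best_mi = register_translate(img, moving)
```

In practice a gradient or simplex optimizer over the full affine (or TPS) parameters replaces the exhaustive loop, and trilinear interpolation replaces `np.roll`, but the MI objective itself is computed from the joint intensity histogram exactly as above.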