Hamzehei Sahand, Bai Jun, Raimondi Gianna, Tripp Rebecca, Ostroff Linnaea, Nabavi Sheida
University of Connecticut, Department of Computer Science & Engineering, Storrs, Connecticut, USA.
University of Connecticut, Department of Physiology & Neurobiology, Storrs, Connecticut, USA.
ACM BCB. 2023 Sep;2023. doi: 10.1145/3584371.3612965. Epub 2023 Oct 4.
In various applications, such as computer vision, medical imaging, and robotics, three-dimensional (3D) image registration is a significant step. It enables the alignment of various datasets into a single coordinate system, consequently providing a consistent perspective that allows further analysis. By precisely aligning images, we can compare, analyze, and combine data collected in different situations. This paper presents a novel approach for 3D or z-stack microscopy and medical image registration, utilizing a combination of conventional and deep learning techniques for feature extraction and adaptive likelihood-based methods for outlier detection. The proposed method uses the Scale-Invariant Feature Transform (SIFT) and the ResNet50 deep residual network to extract effective features and obtain precise and exhaustive representations of image contents. The registration approach also employs the adaptive Maximum Likelihood Estimation SAmple Consensus (MLESAC) method, which optimizes outlier detection and increases resistance to noise and distortion, improving the efficacy of these combined extracted features. This integrated approach demonstrates robustness, flexibility, and adaptability across a variety of imaging modalities, enabling the registration of complex images with higher precision. Experimental results show that the proposed algorithm outperforms state-of-the-art image registration methods, including conventional SIFT, SIFT with Random Sample Consensus (RANSAC), and Oriented FAST and Rotated BRIEF (ORB) methods, as well as registration software packages such as bUnwarpJ and TurboReg, in terms of Mutual Information (MI), Phase Congruency-Based (PCB) metrics, and Gradient-Based Metrics (GBM), using 3D MRI and 3D serial sections of multiplex microscopy images.
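To make the MLESAC idea concrete: where RANSAC scores a hypothesis by its inlier count, MLESAC scores it by the negative log-likelihood of a Gaussian-inlier / uniform-outlier mixture over the residuals. The sketch below is a minimal illustration for a 2D translation model, not the paper's adaptive variant (which the abstract says tunes outlier handling adaptively); the function name, parameters, and the one-point minimal sample are illustrative assumptions.

```python
import numpy as np

def mlesac_translation(src, dst, n_iters=200, sigma=1.0, nu=100.0,
                       gamma=0.5, seed=0):
    """Estimate a 2D translation from noisy point correspondences.

    MLESAC scoring: each residual is modeled as drawn from a mixture of a
    Gaussian (inliers, std `sigma`, mixing weight `gamma`) and a uniform
    distribution over a window of size `nu` (outliers). The hypothesis
    minimizing the mixture's negative log-likelihood wins, instead of the
    one maximizing a hard inlier count as in RANSAC.
    """
    rng = np.random.default_rng(seed)
    best_t, best_cost = None, np.inf
    for _ in range(n_iters):
        i = rng.integers(len(src))              # minimal sample: one pair
        t = dst[i] - src[i]                     # hypothesized translation
        r2 = np.sum((src + t - dst) ** 2, axis=1)   # squared residuals
        p_in = gamma * np.exp(-r2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
        p_out = (1.0 - gamma) / nu              # flat outlier likelihood
        cost = -np.sum(np.log(p_in + p_out))    # mixture NLL to minimize
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t

# Usage: recover a known shift despite ~20% gross outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 50, (80, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (80, 2))
dst[:15] = rng.uniform(0, 50, (15, 2))          # corrupt 15 matches
t_est = mlesac_translation(src, dst)
```

The soft likelihood score is what gives MLESAC its tolerance to residuals near the inlier threshold, which is where hard-count RANSAC is most brittle.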