Miri Mohammad Saleh, Abràmoff Michael D, Kwon Young H, Garvin Mona K
Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA.
Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA; Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA 52242, USA; Iowa City VA Health Care System, Iowa City, IA 52246, USA.
Biomed Opt Express. 2016 Nov 23;7(12):5252-5267. doi: 10.1364/BOE.7.005252. eCollection 2016 Dec 1.
With the availability of different retinal imaging modalities such as fundus photography and spectral-domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme is needed to take advantage of this complementary information. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures common to the pair of images. However, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based registration method for registering fundus photographs and SD-OCT projection images that benefits from vasculature structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes the local structural information by computing histograms of oriented gradients (HOG) from the neighborhood of each CP. The best-matching CPs are identified by calculating the distance between their corresponding feature vectors. After removing the incorrect matches, the best affine transform that registers fundus photographs to SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients; the results showed that the proposed method successfully registered the multimodal images, with a registration error of 25.34 ± 12.34 μm (0.84 ± 0.41 pixels).
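The final step of the pipeline above — estimating an affine transform from putative CP matches while rejecting incorrect ones — can be sketched with a minimal RANSAC loop. This is an illustrative numpy sketch, not the authors' implementation: the function names, iteration count, and inlier tolerance (`tol`, in pixels) are assumptions, and the CP detection and HOG matching stages are taken as given (the inputs are already-paired point arrays).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit so that dst ≈ src @ A.T + t.

    src, dst: (N, 2) arrays of matched control points, N >= 3.
    Returns (A, t) with A a 2x2 matrix and t a length-2 translation.
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # rows for the x-coordinate equations
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # rows for the y-coordinate equations
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    A = np.array([[p[0], p[1]], [p[3], p[4]]])
    t = np.array([p[2], p[5]])
    return A, t

def ransac_affine(src, dst, n_iter=500, tol=2.0, seed=0):
    """RANSAC: sample minimal 3-point subsets, keep the model with the
    most inliers, then refit on all inliers to reject bad matches."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        A, t = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers])
```

With clean synthetic matches plus a handful of gross outliers, the loop recovers the underlying affine transform exactly, since the refit uses only the inlier set; in practice the tolerance would be tuned to the expected localization error of the CPs.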