Institute of Computing, University of Campinas, Campinas, Brazil.
IEEE Trans Biomed Eng. 2012 Aug;59(8):2244-53. doi: 10.1109/TBME.2012.2201717. Epub 2012 May 30.
In this paper, we present an algorithm to detect the presence of diabetic retinopathy (DR)-related lesions in fundus images using a common analytical approach that is capable of identifying both red and bright lesions without requiring specific pre- or postprocessing. Our solution constructs a visual word dictionary representing points of interest (PoIs) located within regions marked by specialists as containing lesions associated with DR, and classifies the fundus images as normal or as containing DR-related pathology based on the presence or absence of these PoIs. The novelty of our approach lies in locating DR lesions in optic fundus images using visual words that combine feature information contained within the images in a framework easily extensible to different types of retinal lesions or pathologies, and in building a specific projection space for each class of interest (e.g., white lesions such as exudates, or normal regions) instead of a common dictionary for all classes. The visual word dictionary was applied to the classification of bright and red lesions using both classical cross-validation and cross-dataset validation to demonstrate the robustness of the approach. We obtained an area under the curve (AUC) of 95.3% for white lesion detection and an AUC of 93.3% for red lesion detection using fivefold cross-validation on our own data, consisting of 687 images of normal retinae, 245 images with bright lesions, 191 with red lesions, and 109 with signs of both bright and red lesions. For the cross-dataset analysis, the visual dictionary also achieved compelling results using our images as the training set and the RetiDB and Messidor images as test sets. In this case, image classification yielded an AUC of 88.1% on the RetiDB dataset and an AUC of 89.3% on the Messidor dataset, both for bright lesion detection. The results indicate the potential for training on images acquired under different setup conditions while maintaining high referral accuracy based on the presence of red lesions, bright lesions, or both. The robustness of the visual dictionary against variations in image quality (blurring), resolution, and retinal background makes it a strong candidate for DR screening of large, diverse communities with varying cameras, settings, and levels of expertise in image capture.
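As a rough illustration of the kind of bag-of-visual-words pipeline the abstract describes, the sketch below extracts local descriptors (the PoIs), clusters them into a visual word dictionary, represents each image as a visual word histogram, and scores a lesion-vs-normal classifier with fivefold cross-validated AUC. The use of SIFT descriptors, a k-means codebook, and an SVM are assumptions for illustration; the paper's actual method builds a separate projection space per class and uses its own feature and classifier choices.

    # Minimal bag-of-visual-words sketch (illustrative; not the authors' exact pipeline).
    # Assumes grayscale fundus images readable from disk and binary labels
    # (1 = lesion present, 0 = normal).
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def extract_descriptors(image_paths):
        """Collect local descriptors (points of interest) from each image."""
        sift = cv2.SIFT_create()
        per_image = []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = sift.detectAndCompute(img, None)
            per_image.append(desc if desc is not None else np.empty((0, 128)))
        return per_image

    def build_dictionary(descriptor_sets, n_words=300):
        """Cluster descriptors into a visual word dictionary (codebook)."""
        stacked = np.vstack([d for d in descriptor_sets if len(d)])
        return KMeans(n_clusters=n_words, n_init=5, random_state=0).fit(stacked)

    def to_histogram(descriptors, dictionary):
        """Represent an image as a normalized histogram of visual word occurrences."""
        words = dictionary.predict(descriptors) if len(descriptors) else np.array([], dtype=int)
        hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
        return hist / (hist.sum() + 1e-9)

    def lesion_detection_auc(image_paths, labels, n_words=300):
        """Fivefold cross-validated ROC AUC for lesion-vs-normal classification."""
        desc_sets = extract_descriptors(image_paths)
        dictionary = build_dictionary(desc_sets, n_words)
        X = np.array([to_histogram(d, dictionary) for d in desc_sets])
        clf = SVC(kernel="rbf", probability=True)
        return cross_val_score(clf, X, np.array(labels), cv=5, scoring="roc_auc").mean()

In practice one such dictionary and classifier would be trained per lesion type (bright and red), matching the per-class setup described above; note that for a faithful evaluation the dictionary should be rebuilt inside each cross-validation fold rather than on the full dataset as in this simplified sketch.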