IEEE Trans Cybern. 2022 Nov;52(11):11407-11417. doi: 10.1109/TCYB.2021.3062638. Epub 2022 Oct 17.
Diabetic retinopathy (DR) grading from fundus images has attracted increasing interest in both academic and industrial communities. Most convolutional neural network-based algorithms treat DR grading as a classification task via image-level annotations. However, these algorithms have not fully explored the valuable information in DR-related lesions. In this article, we present a robust framework that collaboratively utilizes patch-level and image-level annotations for DR severity grading. Through end-to-end optimization, the framework bidirectionally exchanges fine-grained lesion information and image-level grade information, and thereby exploits more discriminative features for DR grading. The proposed framework outperforms recent state-of-the-art algorithms and three clinical ophthalmologists with over nine years of experience. By testing on datasets with different distributions (e.g., label and camera), we show that our algorithm remains robust under the image-quality and distribution variations that commonly exist in real-world practice. We inspect the proposed framework through extensive ablation studies to demonstrate the effectiveness and necessity of each design motivation. The code and some valuable annotations are now publicly available.
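The abstract describes joint training on patch-level lesion annotations and image-level grade labels with information flowing between the two tasks. The following is a minimal sketch of that general idea, not the authors' actual architecture: it assumes a ResNet-50 backbone, a hypothetical `DualSupervisionDRNet` with a dense lesion head and a grading head that consumes pooled lesion evidence, and a simple sum of the two losses optimized end to end.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class DualSupervisionDRNet(nn.Module):
    """Illustrative network trained with both patch-level lesion
    annotations and image-level DR grade labels (assumed design,
    not the paper's exact framework)."""

    def __init__(self, num_lesion_types=4, num_grades=5):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Shared feature extractor: all ResNet-50 layers before global pooling.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        feat_dim = backbone.fc.in_features  # 2048 for ResNet-50

        # Patch-level head: dense lesion prediction over the feature map.
        self.lesion_head = nn.Conv2d(feat_dim, num_lesion_types, kernel_size=1)
        # Image-level head: grade prediction from pooled features concatenated
        # with pooled lesion evidence, so lesion cues inform grading.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.grade_head = nn.Linear(feat_dim + num_lesion_types, num_grades)

    def forward(self, x):
        feats = self.encoder(x)                           # (B, 2048, H', W')
        lesion_map = self.lesion_head(feats)              # (B, L, H', W')
        pooled_feats = self.pool(feats).flatten(1)        # (B, 2048)
        pooled_lesion = self.pool(lesion_map).flatten(1)  # (B, L)
        grade_logits = self.grade_head(
            torch.cat([pooled_feats, pooled_lesion], dim=1))
        return lesion_map, grade_logits


# End-to-end joint optimization: both losses back-propagate through the
# shared encoder, so lesion and grade supervision influence each other.
model = DualSupervisionDRNet()
images = torch.randn(2, 3, 512, 512)
lesion_targets = torch.randint(0, 2, (2, 4, 16, 16)).float()  # patch-level labels
grade_targets = torch.randint(0, 5, (2,))                     # image-level grades

lesion_map, grade_logits = model(images)
loss = (nn.functional.binary_cross_entropy_with_logits(lesion_map, lesion_targets)
        + nn.functional.cross_entropy(grade_logits, grade_targets))
loss.backward()
```

Because the two heads share one encoder and are optimized jointly, gradients from the grading loss shape the lesion features and vice versa, which is one plausible way to realize the bidirectional exchange of lesion and grade information described above.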