College of Artificial Intelligence, Nankai University, Tianjin, China.
Department of Ophthalmology, Tianjin Huanhu Hospital, Tianjin, China.
Comput Methods Programs Biomed. 2022 Jun;219:106739. doi: 10.1016/j.cmpb.2022.106739. Epub 2022 Mar 11.
Early fundus screening and timely treatment of ophthalmic diseases can effectively prevent blindness. Previous studies focus only on fundus images of a single eye, without exploiting the relevant information shared between the left and right eyes, whereas clinical ophthalmologists usually rely on binocular fundus images to aid ocular disease diagnosis. Moreover, previous works typically target only one ocular disease at a time. Considering the importance of patient-level bilateral eye diagnosis and multi-label ophthalmic disease classification, we propose a bilateral feature enhancement network (BFENet) to address these two problems.
We propose a two-stream interactive CNN architecture for multi-label ophthalmic disease classification with bilateral fundus images. First, we design a feature enhancement module that exploits the interaction between bilateral fundus images to strengthen the extracted features. Specifically, an attention mechanism learns the interdependence between local and global information in the interactive two-stream architecture, reweighting these features and recovering more details. To capture more disease characteristics, we further design a novel multiscale module that enriches the feature maps by superimposing feature information extracted at different resolutions through dilated convolutions. A minimal sketch of such a two-stream design is given below.
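The sketch below illustrates, in PyTorch, one plausible reading of this description: a shared backbone per eye, a bilateral feature enhancement module that derives channel-attention weights from the concatenated left/right features, and a multiscale module built from parallel dilated convolutions. All module and parameter names here are illustrative assumptions; the paper's exact BFENet layers, channel sizes, and backbone are not specified in the abstract.

```python
# Hypothetical two-stream bilateral classifier (not the authors' exact BFENet).
import torch
import torch.nn as nn


class FeatureEnhancement(nn.Module):
    """Reweight each stream using channel attention computed from the
    concatenated left/right feature maps (bilateral interaction)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, left, right):
        b, c, _, _ = left.shape
        joint = torch.cat([left, right], dim=1)        # (b, 2c, h, w)
        w = self.fc(self.pool(joint).flatten(1))       # (b, 2c) attention weights
        wl, wr = w[:, :c], w[:, c:]
        # Each eye's features are reweighted with information from the other eye.
        return left * wl.view(b, c, 1, 1), right * wr.view(b, c, 1, 1)


class MultiScale(nn.Module):
    """Enrich features by summing parallel dilated convolutions with
    different receptive fields."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
             for d in dilations]
        )

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)


class TwoStreamNet(nn.Module):
    """Shared backbone per eye, bilateral interaction, multi-label head."""
    def __init__(self, num_labels: int = 8, channels: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(                 # toy stem; a real model
            nn.Conv2d(3, channels, 7, stride=2, padding=3),  # would use e.g. a ResNet
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.enhance = FeatureEnhancement(channels)
        self.multiscale = MultiScale(channels)
        self.head = nn.Linear(2 * channels, num_labels)

    def forward(self, left_img, right_img):
        fl, fr = self.backbone(left_img), self.backbone(right_img)
        fl, fr = self.enhance(fl, fr)                  # bilateral feature enhancement
        fl, fr = self.multiscale(fl), self.multiscale(fr)
        pooled = torch.cat([fl.mean(dim=(2, 3)), fr.mean(dim=(2, 3))], dim=1)
        return self.head(pooled)                       # multi-label logits


# Example: one bilateral fundus pair at 224x224 resolution.
model = TwoStreamNet()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 8])
```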
On the off-site set, the Kappa, F-score, AUC, and Final score are 0.535, 0.892, 0.912, and 0.780, respectively. On the on-site set, they are 0.513, 0.886, 0.903, and 0.767, respectively. Compared with existing methods, BFENet achieves the best classification performance.
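The reported Final scores are numerically consistent with the arithmetic mean of Kappa, F-score, and AUC (e.g. (0.535 + 0.892 + 0.912) / 3 ≈ 0.780), a common convention for this benchmark, though the abstract does not define the metric explicitly. Assuming that convention, a minimal scikit-learn sketch of the evaluation is shown below; the function and variable names are illustrative, not the authors' code.

```python
# Illustrative multi-label evaluation; the paper's exact protocol may differ.
# Final score is assumed to be the mean of Kappa, F-score, and AUC.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score, roc_auc_score


def final_score(y_true, y_prob, threshold=0.5):
    """y_true, y_prob: (n_samples, n_labels) binary labels and predicted
    probabilities; metrics are computed over the flattened label matrix."""
    y_pred = (y_prob >= threshold).astype(int)
    kappa = cohen_kappa_score(y_true.ravel(), y_pred.ravel())
    f = f1_score(y_true.ravel(), y_pred.ravel())
    auc = roc_auc_score(y_true.ravel(), y_prob.ravel())
    return kappa, f, auc, (kappa + f + auc) / 3


# Toy example: random predictions for 5 patients x 8 disease labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(5, 8))
y_prob = rng.random((5, 8))
print(final_score(y_true, y_prob))
```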
Comprehensive experiments demonstrate the effectiveness of the proposed model. Moreover, our method can be extended to similar tasks in which the correlation between different images is important.