
Combining convolutional neural networks and self-attention for fundus diseases identification.

Affiliations

School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China.

College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China.

Publication information

Sci Rep. 2023 Jan 2;13(1):76. doi: 10.1038/s41598-022-27358-6.

DOI: 10.1038/s41598-022-27358-6
PMID: 36593268
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9807560/
Abstract

Early detection of lesions is of great significance for treating fundus diseases. Fundus photography is an effective and convenient screening technique by which common fundus diseases can be detected. In this study, we use color fundus images to distinguish among multiple fundus diseases. Existing research on fundus disease classification has achieved some success through deep learning techniques, but there is still much room for improvement in model evaluation metrics using only deep convolutional neural network (CNN) architectures with limited global modeling ability; the simultaneous diagnosis of multiple fundus diseases still faces great challenges. Therefore, given that the self-attention (SA) model with a global receptive field may have robust global-level feature modeling ability, we propose a multistage fundus image classification model MBSaNet which combines CNN and SA mechanism. The convolution block extracts the local information of the fundus image, and the SA module further captures the complex relationships between different spatial positions, thereby directly detecting one or more fundus diseases in retinal fundus image. In the initial stage of feature extraction, we propose a multiscale feature fusion stem, which uses convolutional kernels of different scales to extract low-level features of the input image and fuse them to improve recognition accuracy. The training and testing were performed based on the ODIR-5k dataset. The experimental results show that MBSaNet achieves state-of-the-art performance with fewer parameters. The wide range of diseases and different fundus image collection conditions confirmed the applicability of MBSaNet.

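The abstract argues that a pure CNN has limited global modeling ability, whereas a self-attention (SA) module with a global receptive field lets every spatial position of the feature map interact with every other. The paper's actual SA module is not specified in the abstract; the following minimal NumPy sketch (all shapes and variable names are hypothetical, not taken from MBSaNet) only illustrates this global-interaction property: single-head scaled dot-product attention applied to a flattened CNN feature map.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, wq, wk, wv):
    """Single-head scaled dot-product self-attention over spatial positions.

    feat: (n, d) array -- n spatial positions, d channels, e.g. a CNN
    feature map reshaped from (h, w, d) to (h*w, d).
    Returns the attended features and the (n, n) attention matrix.
    """
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    # (n, n) weights: every position attends to every other position,
    # which is what gives the module its global receptive field.
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v, attn

rng = np.random.default_rng(0)
n, d = 16, 8                                  # e.g. a 4x4 feature map with 8 channels
feat = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(feat, wq, wk, wv)
print(out.shape)                              # (16, 8)
```

Each row of `attn` is a probability distribution over all positions, so even two pixels at opposite corners of the fundus image can exchange information in one step; a convolution of kernel size k would need roughly image-width/k stacked layers to do the same.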

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/9d291434e104/41598_2022_27358_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/17e606d4b3cd/41598_2022_27358_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/f7925af3dbe1/41598_2022_27358_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/b1b4e130119a/41598_2022_27358_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/32606609ed65/41598_2022_27358_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/1dcc91040682/41598_2022_27358_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/591b0eb4449c/41598_2022_27358_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/31b9/9807560/bc25c982925c/41598_2022_27358_Fig8_HTML.jpg

Similar articles

1. Combining convolutional neural networks and self-attention for fundus diseases identification.
Sci Rep. 2023 Jan 2;13(1):76. doi: 10.1038/s41598-022-27358-6.
2. Multi-label classification of retinal diseases based on fundus images using Resnet and Transformer.
Med Biol Eng Comput. 2024 Nov;62(11):3459-3469. doi: 10.1007/s11517-024-03144-6. Epub 2024 Jun 14.
3. A deep learning framework for the early detection of multi-retinal diseases.
PLoS One. 2024 Jul 25;19(7):e0307317. doi: 10.1371/journal.pone.0307317. eCollection 2024.
4. A novel retinal vessel detection approach based on multiple deep convolution neural networks.
Comput Methods Programs Biomed. 2018 Dec;167:43-48. doi: 10.1016/j.cmpb.2018.10.021. Epub 2018 Oct 30.
5. BFENet: A two-stream interaction CNN method for multi-label ophthalmic diseases classification with bilateral fundus images.
Comput Methods Programs Biomed. 2022 Jun;219:106739. doi: 10.1016/j.cmpb.2022.106739. Epub 2022 Mar 11.
6. Investigation of the Role of Convolutional Neural Network Architectures in the Diagnosis of Glaucoma using Color Fundus Photography.
Turk J Ophthalmol. 2022 Jun 29;52(3):193-200. doi: 10.4274/tjo.galenos.2021.29726.
7. Deep Ensemble Learning Based Objective Grading of Macular Edema by Extracting Clinically Significant Findings from Fused Retinal Imaging Modalities.
Sensors (Basel). 2019 Jul 5;19(13):2970. doi: 10.3390/s19132970.
8. Multi-label classification of fundus images with graph convolutional network and LightGBM.
Comput Biol Med. 2022 Oct;149:105909. doi: 10.1016/j.compbiomed.2022.105909. Epub 2022 Aug 11.
9. MTPA_Unet: Multi-Scale Transformer-Position Attention Retinal Vessel Segmentation Network Joint Transformer and CNN.
Sensors (Basel). 2022 Jun 17;22(12):4592. doi: 10.3390/s22124592.
10. Automated fundus ultrasound image classification based on siamese convolutional neural networks with multi-attention.
BMC Med Imaging. 2023 Jul 6;23(1):89. doi: 10.1186/s12880-023-01047-w.

Cited by

1. Hybrid attention-based deep learning for multi-label ophthalmic disease detection on fundus images.
Graefes Arch Clin Exp Ophthalmol. 2025 May 29. doi: 10.1007/s00417-025-06858-x.
2. Artificial intelligence technology in ophthalmology public health: current applications and future directions.
Front Cell Dev Biol. 2025 Apr 17;13:1576465. doi: 10.3389/fcell.2025.1576465. eCollection 2025.
3. Detecting Glaucoma in Highly Myopic Eyes From Fundus Photographs Using Deep Convolutional Neural Networks.
Clin Exp Ophthalmol. 2025 Jul;53(5):502-515. doi: 10.1111/ceo.14498. Epub 2025 Feb 9.
4. SMMF: a self-attention-based multi-parametric MRI feature fusion framework for the diagnosis of bladder cancer grading.
Front Oncol. 2024 Mar 7;14:1337186. doi: 10.3389/fonc.2024.1337186. eCollection 2024.
5. Classification of Color Fundus Photographs Using Fusion Extracted Features and Customized CNN Models.
Healthcare (Basel). 2023 Aug 7;11(15):2228. doi: 10.3390/healthcare11152228.