A Lightweight CNN for Multiclass Retinal Disease Screening with Explainable AI.

Author Information

Arnob Arjun Kumar Bose, Chayon Muhammad Hasibur Rashid, Al Farid Fahmid, Husen Mohd Nizam, Ahmed Firoz

Affiliations

Department of Computer Science, American International University-Bangladesh, Dhaka 1229, Bangladesh.

Faculty of Computer Science and Informatics, Berlin School of Business and Innovation, 12043 Berlin, Germany.

Publication Information

J Imaging. 2025 Aug 15;11(8):275. doi: 10.3390/jimaging11080275.


DOI: 10.3390/jimaging11080275
PMID: 40863485
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12387214/
Abstract

Timely, balanced, and transparent detection of retinal diseases is essential to avert irreversible vision loss; however, current deep learning screeners are hampered by class imbalance, large models, and opaque reasoning. This paper presents a lightweight attention-augmented convolutional neural network (CNN) that addresses all three barriers. The network combines depthwise separable convolutions, squeeze-and-excitation, and global-context attention, and it incorporates gradient-based class activation mapping (Grad-CAM) and Grad-CAM++ to ensure that every decision is accompanied by pixel-level evidence. A 5335-image ten-class color-fundus dataset from Bangladeshi clinics, which was severely skewed (17–1509 images per class), was equalized using a synthetic minority oversampling technique (SMOTE) and task-specific augmentations. Images were resized to 150×150 px and split 70:15:15. The training used the adaptive moment estimation (Adam) optimizer (initial learning rate of 1×10⁻⁴, reduce-on-plateau, early stopping), ℓ₂ regularization, and dual dropout. The 16.6 M parameter network converged in fewer than 50 epochs on a mid-range graphics processing unit (GPU) and reached 87.9% test accuracy, a macro-precision of 0.882, a macro-recall of 0.879, and a macro-F1-score of 0.880, reducing the error by 58% relative to the best ImageNet backbone (Inception-V3, 40.4% accuracy). Eight disorders recorded true-positive rates above 95%; macular scar and central serous chorioretinopathy attained F1-scores of 0.77 and 0.89, respectively. Saliency maps consistently highlighted optic disc margins, subretinal fluid, and other hallmarks. Targeted class re-balancing, lightweight attention, and integrated explainability, therefore, deliver accurate, transparent, and deployable retinal screening suitable for point-of-care ophthalmic triage on resource-limited hardware.
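
The sketches that follow are minimal Python illustrations of the pipeline the abstract describes, written under stated assumptions rather than taken from the authors' code. First, the 70:15:15 stratified split and SMOTE re-balancing of the training fold; `images` and `labels` are placeholders for the loaded 5335 fundus images (already resized to 150×150) and their ten integer class labels, and the use of scikit-learn and imbalanced-learn is an assumption.

```python
# Hypothetical data preparation: 70:15:15 stratified split, then SMOTE
# oversampling of the training fold only. `images` (N, 150, 150, 3) and
# `labels` (N,) are placeholders for the loaded fundus dataset.
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE  # assumes the imbalanced-learn package

# Hold out 15% for testing, then 15% of the whole set for validation.
x_tmp, x_test, y_tmp, y_test = train_test_split(
    images, labels, test_size=0.15, stratify=labels, random_state=42)
x_train, x_val, y_train, y_val = train_test_split(
    x_tmp, y_tmp, test_size=0.15 / 0.85, stratify=y_tmp, random_state=42)

# SMOTE interpolates new minority-class samples between feature vectors,
# so images are flattened for resampling and reshaped back afterwards.
n, h, w, c = x_train.shape
x_bal, y_bal = SMOTE(random_state=42).fit_resample(x_train.reshape(n, -1), y_train)
x_train, y_train = x_bal.reshape(-1, h, w, c), y_bal
```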
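
Next, a minimal Keras sketch of the kind of lightweight building block the abstract names: a depthwise separable convolution followed by squeeze-and-excitation (SE) channel attention, with ℓ₂ regularization and two dropout layers in the classifier head. Filter counts, the SE reduction ratio, the regularization strength, and the overall depth are illustrative assumptions; the paper's 16.6 M-parameter network and its global-context attention module are not reproduced here.

```python
# Illustrative lightweight block: depthwise separable convolution + SE attention.
import tensorflow as tf
from tensorflow.keras import layers, models

def se_block(x, ratio=8):
    """Squeeze-and-excitation: re-weight channels using global context."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                 # squeeze: per-channel mean
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # excitation weights in [0, 1]
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                       # rescale feature maps

def ds_conv_block(x, filters):
    """Depthwise separable conv + BN + ReLU, followed by SE attention."""
    x = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return se_block(x)

inputs = layers.Input(shape=(150, 150, 3))                 # 150x150 RGB fundus image
x = ds_conv_block(inputs, 32)
x = layers.MaxPooling2D()(x)
x = ds_conv_block(x, 64)
x = layers.MaxPooling2D()(x)
x = ds_conv_block(x, 128)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)                                 # first of the two dropout layers
x = layers.Dense(256, activation="relu",
                 kernel_regularizer=tf.keras.regularizers.l2(1e-4))(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(10, activation="softmax")(x)        # ten retinal disease classes
model = models.Model(inputs, outputs)
```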
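
Training can then follow the quoted hyperparameters: Adam at an initial learning rate of 1×10⁻⁴, a reduce-on-plateau schedule, and early stopping within a 50-epoch budget. The patience values, plateau factor, and batch size are assumptions; `model` and the data arrays come from the sketches above.

```python
# Hypothetical training setup matching the hyperparameters quoted in the abstract.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",                # integer class labels
    metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                         factor=0.5, patience=3),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=8,
                                     restore_best_weights=True)]

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=50, batch_size=32,
                    callbacks=callbacks)
```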
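
Finally, a bare-bones Grad-CAM routine of the kind used to attach pixel-level evidence to each prediction: gradients of the top class score with respect to a late convolutional feature map are average-pooled into channel weights and combined into a coarse saliency map. The caller must supply the name of the model's last convolutional layer; Grad-CAM++ and the heat-map overlay step are omitted.

```python
# Minimal Grad-CAM sketch (Grad-CAM++ and overlay rendering omitted).
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Return a [0, 1] saliency map for the model's top predicted class."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])     # add a batch dimension
        class_idx = int(tf.argmax(preds[0]))
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)           # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))           # pooled channel importance
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1) # weighted sum over channels
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)    # keep positive evidence, normalize
    return cam.numpy()
```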

Figures 1–19 of the article (jimaging-11-00275-g001 through jimaging-11-00275-g019) are available with the open-access full text on PMC.

Similar Articles

[1]
A Lightweight CNN for Multiclass Retinal Disease Screening with Explainable AI.

J Imaging. 2025-8-15

[2]
CXR-MultiTaskNet a unified deep learning framework for joint disease localization and classification in chest radiographs.

Sci Rep. 2025-8-31

[3]
Development and Validation of a Convolutional Neural Network Model to Predict a Pathologic Fracture in the Proximal Femur Using Abdomen and Pelvis CT Images of Patients With Advanced Cancer.

Clin Orthop Relat Res. 2023-11-1

[4]
DDoS classification of network traffic in software defined networking SDN using a hybrid convolutional and gated recurrent neural network.

Sci Rep. 2025-8-9

[5]
Deep Learning for the Early Detection of Invasive Ductal Carcinoma in Histopathological Images: Convolutional Neural Network Approach With Transfer Learning.

JMIR Form Res. 2025-8-21

[6]
Artificial intelligence for diagnosing exudative age-related macular degeneration.

Cochrane Database Syst Rev. 2024-10-17

[7]
Optical coherence tomography (OCT) for detection of macular oedema in patients with diabetic retinopathy.

Cochrane Database Syst Rev. 2015-1-7

[8]
KidneyNeXt: A Lightweight Convolutional Neural Network for Multi-Class Renal Tumor Classification in Computed Tomography Imaging.

J Clin Med. 2025-7-11

[9]
Prescription of Controlled Substances: Benefits and Risks

2025-1

[10]
A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases.

Br J Dermatol. 2024-7-16
