Cheng Yupeng, Guo Qing, Juefei-Xu Felix, Fu Huazhu, Lin Shang-Wei, Lin Weisi
IEEE J Biomed Health Inform. 2025 Jan;29(1):297-309. doi: 10.1109/JBHI.2024.3469630. Epub 2025 Jan 7.
Diabetic Retinopathy (DR) is a leading cause of vision loss worldwide. To aid its diagnosis, numerous cutting-edge works have built powerful deep neural networks (DNNs) that automatically grade DR from retinal fundus images (RFIs). However, RFIs are commonly affected by camera exposure issues that may lead to incorrect grades, and mis-graded results can pose a high risk of aggravating the patient's condition. In this paper, we study this problem from the viewpoint of adversarial attacks. We identify and introduce a novel solution to an entirely new task, termed the adversarial exposure attack, which produces naturally exposed images that mislead state-of-the-art DNNs. We validate the proposed method on a real-world public DR dataset with three DNNs, i.e., ResNet50, MobileNet, and EfficientNet, demonstrating that our method achieves both high image quality and a high success rate in transferring attacks across models. Our method reveals potential threats to DNN-based automatic DR grading and should benefit the future development of exposure-robust DR grading methods.