Zbrzezny Agnieszka M, Grzybowski Andrzej E
Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland.
Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland.
J Clin Med. 2023 May 4;12(9):3266. doi: 10.3390/jcm12093266.
Artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have progressed significantly in recent years. The development of AI algorithms has made the diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, considerably less complicated, and these algorithms now perform on par with ophthalmologists. However, when building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, and numerous articles have discussed the topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". A literature review was performed for this study, which included a thorough search of open-access research papers using online sources (PubMed and Google). The research provides examples of attack strategies specific to medical images. Unfortunately, attack algorithms tailored to the various types of ophthalmic images have yet to be developed; this remains an open task. Consequently, it is necessary to build algorithms that validate the computations of artificial intelligence models and explain their findings. In this article, we focus on adversarial attacks, one of the best-known attack methods, which provide evidence (i.e., adversarial examples) of the lack of resilience of decision models that do not offer provable guarantees. Adversarial attacks can cause deep learning systems to produce inaccurate findings and can have catastrophic effects in the healthcare industry, such as healthcare financing fraud and misdiagnosis.
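To make the notion of an adversarial example concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), a widely cited attack of the kind surveyed here, written in PyTorch. The model, input tensor, and epsilon value are hypothetical placeholders; this sketch is illustrative and is not the specific method of the reviewed papers.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Generate an adversarial example with FGSM.

    Hypothetical inputs: `model` is a trained classifier, `image` is a
    batch of pixel values in [0, 1], `label` holds the true class indices,
    and `epsilon` is the perturbation budget.
    """
    image = image.clone().detach().requires_grad_(True)
    # Compute the classification loss and its gradient w.r.t. the input.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step by epsilon along the sign of the gradient, then clamp back
    # to the valid pixel range so the perturbation stays imperceptible.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Under these assumptions, the returned image differs from the original by at most epsilon per pixel yet can flip the model's prediction, which is the failure mode that motivates models with provable guarantees.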