Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan.
Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford BD7 1DP, UK.
Sensors (Basel). 2021 Jun 7;21(11):3922. doi: 10.3390/s21113922.
Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted inputs cause DL models to misclassify instances that a human would consider benign. Such adversarial threats have also been demonstrated in practical, physical-world scenarios. Thus, adversarial attacks and defenses, and the reliability of machine learning more broadly, have drawn growing interest and have become a hot topic of research in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, combining adversarial training with a feature-fusion strategy that preserves classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, which remains a state-of-the-art challenge. Results on retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
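To make the two ingredients named in the abstract concrete, the sketch below illustrates a generic multiplicative speckle-noise perturbation and an adversarial-training-style batch augmentation in which clean and perturbed images are trained on together. This is a minimal illustration under assumed conventions, not the authors' implementation: the function names, the `sigma` strength parameter, and the toy image shapes are all hypothetical.

```python
import numpy as np

def speckle_perturb(x, sigma=0.1, rng=None):
    # Multiplicative (speckle) noise: x' = x + x * n, with n ~ N(0, sigma^2).
    # sigma is a hypothetical strength parameter, not taken from the paper.
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, sigma, size=x.shape)
    return np.clip(x + x * noise, 0.0, 1.0)  # keep pixel values in [0, 1]

def adversarial_batch(images, labels, sigma=0.1, rng=None):
    # Adversarial-training-style augmentation: pair each clean image with a
    # speckle-perturbed copy so the model is optimized on both during training.
    perturbed = speckle_perturb(images, sigma=sigma, rng=rng)
    x = np.concatenate([images, perturbed], axis=0)
    y = np.concatenate([labels, labels], axis=0)
    return x, y

# Toy usage: a batch of 4 stand-in "fundus images" (64x64 RGB in [0, 1]).
rng = np.random.default_rng(42)
images = rng.uniform(0.0, 1.0, size=(4, 64, 64, 3))
labels = np.array([0, 1, 0, 1])
x_aug, y_aug = adversarial_batch(images, labels, sigma=0.1, rng=rng)
print(x_aug.shape, y_aug.shape)  # (8, 64, 64, 3) (8,)
```

The random-draw variant shown here only approximates the setting in the paper, where the speckle perturbation is an adversarial attack; an attack would search for the noise realization that maximizes the model's loss rather than sampling it once.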