Oda Junsei, Takemoto Kazuhiro
Department of Bioscience and Bioinformatics, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan.
Data Science and AI Research Center, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan.
Sci Rep. 2025 May 24;15(1):18119. doi: 10.1038/s41598-025-03546-y.
Skin cancer is one of the most prevalent malignant tumors, and early detection is crucial for patient prognosis, leading to the development of mobile applications as screening tools. Recent advances in deep neural networks (DNNs) have accelerated the deployment of DNN-based applications for automated skin cancer detection. While DNNs have demonstrated remarkable capabilities, they are known to be vulnerable to adversarial attacks, where carefully crafted perturbations can manipulate model predictions. The vulnerability of deployed medical mobile applications to such attacks remains largely unexplored under real-world conditions. Here, we investigate the susceptibility of three DNN-based medical mobile applications to physical adversarial attacks using transparent camera stickers under black-box conditions where internal model architectures are inaccessible. Through digital experiments with various DNN architectures trained on a publicly available skin lesion dataset, we first demonstrate that camera-based adversarial patterns can achieve high transferability across different models. Using these findings, we implement physical attacks by attaching optimized transparent stickers to mobile device cameras. Our results show that these attacks successfully manipulate application predictions, particularly for melanoma images, with attack success rates reaching 50-80% across all applications while maintaining visual imperceptibility. Notably, melanoma images showed consistently higher vulnerability compared to nevus images across all tested applications. To the best of our knowledge, this is the first demonstration of real-world adversarial vulnerabilities in deployed medical mobile applications, revealing significant security concerns where prediction manipulation could affect diagnostic processes. Our study demonstrates the importance of security evaluation in deploying such applications in clinical settings.
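The abstract summarizes the attack pipeline (a fixed transparent sticker on the camera, optimized on surrogate models for black-box transfer) without implementation detail. Below is a minimal, hypothetical sketch of that general idea: a single image-independent perturbation, standing in for the sticker, is optimized with a PGD-style loop against an ensemble of surrogate classifiers so that it transfers to an unseen deployed model. This is not the authors' code; `optimize_overlay`, `surrogates`, `loader`, the target-class formulation, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: optimize one universal, camera-level adversarial
# overlay against an ensemble of surrogate classifiers. Not the paper's
# actual method; names and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def optimize_overlay(surrogates, loader, target_class, eps=8/255,
                     steps=200, lr=0.01, device="cpu"):
    """PGD-style search for one image-independent perturbation, the
    digital analogue of a transparent sticker fixed over the lens."""
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for images, _ in loader:
            images = images.to(device)
            # The same overlay is added to every image, mimicking a
            # fixed optical element in front of the camera.
            adv = torch.clamp(images + delta, 0.0, 1.0)
            target = torch.full((images.size(0),), target_class,
                                device=device)
            # Averaging the loss over several surrogate architectures
            # encourages transfer to the unseen deployed model.
            loss = sum(F.cross_entropy(m(adv), target)
                       for m in surrogates) / len(surrogates)
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the perturbation small so the physical sticker
            # stays visually near-transparent.
            with torch.no_grad():
                delta.clamp_(-eps, eps)
    return delta.detach()
```

In this formulation, minimizing the cross-entropy toward a benign target class (e.g., nevus) pushes predictions away from melanoma, which matches the abstract's observation that melanoma images were the more vulnerable class.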