Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks.

Authors

Oda Junsei, Takemoto Kazuhiro

Affiliations

Department of Bioscience and Bioinformatics, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan.

Data Science and AI Research Center, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan.

Publication

Sci Rep. 2025 May 24;15(1):18119. doi: 10.1038/s41598-025-03546-y.

DOI: 10.1038/s41598-025-03546-y
PMID: 40413255
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12103594/
Abstract

Skin cancer is one of the most prevalent malignant tumors, and early detection is crucial for patient prognosis, leading to the development of mobile applications as screening tools. Recent advances in deep neural networks (DNNs) have accelerated the deployment of DNN-based applications for automated skin cancer detection. While DNNs have demonstrated remarkable capabilities, they are known to be vulnerable to adversarial attacks, where carefully crafted perturbations can manipulate model predictions. The vulnerability of deployed medical mobile applications to such attacks remains largely unexplored under real-world conditions. Here, we investigate the susceptibility of three DNN-based medical mobile applications to physical adversarial attacks using transparent camera stickers under black-box conditions where internal model architectures are inaccessible. Through digital experiments with various DNN architectures trained on a publicly available skin lesion dataset, we first demonstrate that camera-based adversarial patterns can achieve high transferability across different models. Using these findings, we implement physical attacks by attaching optimized transparent stickers to mobile device cameras. Our results show that these attacks successfully manipulate application predictions, particularly for melanoma images, with attack success rates reaching 50-80% across all applications while maintaining visual imperceptibility. Notably, melanoma images showed consistently higher vulnerability compared to nevus images across all tested applications. To the best of our knowledge, this is the first demonstration of real-world adversarial vulnerabilities in deployed medical mobile applications, revealing significant security concerns where prediction manipulation could affect diagnostic processes. Our study demonstrates the importance of security evaluation in deploying such applications in clinical settings.
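The attack pipeline the abstract describes — craft a perturbation against surrogate models, then transfer it to black-box applications — builds on gradient-based adversarial examples. Below is a minimal, purely illustrative sketch of that core idea: a gradient-sign (FGSM-style) perturbation that flips a toy linear "melanoma vs. nevus" classifier. This is not the paper's camera-sticker optimization; the weights and "image" are synthetic stand-ins.

```python
import numpy as np

# Illustrative sketch only: a gradient-sign (FGSM-style) perturbation
# against a toy linear "melanoma vs. nevus" classifier. This is NOT the
# paper's camera-sticker method; weights and the "image" are synthetic.

rng = np.random.default_rng(0)

w = rng.normal(size=64)            # toy model weights (stand-in for a DNN)
x = rng.normal(size=64)            # a "lesion image" flattened to a vector
if w @ x <= 0:                     # start from a "melanoma" prediction
    x = -x

def predict(v):
    return "melanoma" if w @ v > 0 else "nevus"

# For a linear model, the input gradient of the score is w itself, so the
# sign-gradient step that lowers the score is -eps * sign(w). Here eps is
# chosen just large enough to flip the decision; real attacks instead fix
# a small eps budget so the perturbation stays imperceptible.
eps = (w @ x + 1.0) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)       # per-component change bounded by eps

print(predict(x), "->", predict(x_adv))   # melanoma -> nevus
```

The paper's physical setting adds two constraints this sketch ignores: the perturbation must survive the camera's optics (hence the transparent sticker) and must transfer across unknown model architectures, which the authors address by optimizing against an ensemble of surrogate DNNs.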


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3818/12103594/4e294cdedddd/41598_2025_3546_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3818/12103594/ea4b3722e232/41598_2025_3546_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3818/12103594/a249b13c873c/41598_2025_3546_Fig3_HTML.jpg

Similar articles

1. Mobile applications for skin cancer detection are vulnerable to physical camera-based adversarial attacks.
Sci Rep. 2025 May 24;15(1):18119. doi: 10.1038/s41598-025-03546-y.
2. Universal adversarial attacks on deep neural networks for medical image classification.
BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.
3. Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging.
Med Biol Eng Comput. 2025 Mar;63(3):673-690. doi: 10.1007/s11517-024-03226-5. Epub 2024 Oct 25.
4. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.
5. Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning.
J Imaging. 2022 Feb 4;8(2):38. doi: 10.3390/jimaging8020038.
6. Remix: Towards the transferability of adversarial examples.
Neural Netw. 2023 Jun;163:367-378. doi: 10.1016/j.neunet.2023.04.012. Epub 2023 Apr 18.
7. Auto encoder-based defense mechanism against popular adversarial attacks in deep learning.
PLoS One. 2024 Oct 21;19(10):e0307363. doi: 10.1371/journal.pone.0307363. eCollection 2024.
8. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Neural Comput. 2019 Aug;31(8):1624-1670. doi: 10.1162/neco_a_01209. Epub 2019 Jul 1.
9. Physical Adversarial Attack Meets Computer Vision: A Decade Survey.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9797-9817. doi: 10.1109/TPAMI.2024.3430860. Epub 2024 Nov 7.
10. Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks.
IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):3-14. doi: 10.1109/TNNLS.2021.3089128. Epub 2023 Jan 5.
