

Towards Adversarial Robustness with Early Exit Ensembles.

Publication

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:313-316. doi: 10.1109/EMBC48229.2022.9871347.

DOI: 10.1109/EMBC48229.2022.9871347
PMID: 36086386
Abstract

Deep learning techniques are increasingly used for decision-making in health applications; however, they can easily be manipulated by adversarial examples across different clinical domains. Their security and privacy vulnerabilities raise concerns about the practical deployment of these systems. The number and variety of adversarial attacks grow continuously, making it difficult for mitigation approaches to provide effective solutions. Current mitigation techniques often rely on expensive re-training procedures as new attacks emerge. In this paper, we propose a novel adversarial mitigation technique for biosignal classification tasks. Our approach is based on recent findings interpreting early exit neural networks as an ensemble of weight-sharing sub-networks. Our experiments on state-of-the-art deep learning models show that early exit ensembles can provide robustness generalizable to various white box and universal adversarial attacks. The approach increases the accuracy of vulnerable deep learning models by up to 60 percentage points, while providing adversarial mitigation comparable to adversarial training. This is achieved without previous exposure to the adversarial perturbation or the computational burden of re-training.
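The abstract's core idea is that a network with early exits can be read as an ensemble of weight-sharing sub-networks: each backbone block feeds both the next block and a lightweight exit classifier, and the exits' predictions are averaged at inference. The following is a minimal, hypothetical NumPy sketch of that structure (random weights, illustrative class and parameter names; not the authors' code or architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class EarlyExitEnsemble:
    """Backbone of n_blocks layers, each followed by a lightweight exit head.

    The k-th exit together with the first k backbone blocks forms a
    sub-network; all sub-networks share the backbone weights.
    """

    def __init__(self, in_dim, hidden, n_classes, n_blocks=3):
        self.blocks, self.exits = [], []
        d = in_dim
        for _ in range(n_blocks):
            self.blocks.append(rng.standard_normal((d, hidden)) * 0.1)
            self.exits.append(rng.standard_normal((hidden, n_classes)) * 0.1)
            d = hidden

    def forward(self, x):
        probs = []
        h = x
        for W_block, W_exit in zip(self.blocks, self.exits):
            h = relu(h @ W_block)              # shared backbone computation
            probs.append(softmax(h @ W_exit))  # this exit's class distribution
        return np.mean(probs, axis=0)          # ensemble average over exits

model = EarlyExitEnsemble(in_dim=16, hidden=32, n_classes=5)
x = rng.standard_normal((4, 16))  # a batch of 4 illustrative feature vectors
p = model.forward(x)
print(p.shape)  # (4, 5); each row is an averaged class distribution
```

Because each exit sees the input through a different depth of the shared backbone, an adversarial perturbation crafted against the full network need not fool every exit, which is the intuition behind averaging their outputs as a defence.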


Similar articles

1. Towards Adversarial Robustness with Early Exit Ensembles.
Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:313-316. doi: 10.1109/EMBC48229.2022.9871347.
2. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.
3. Towards improving fast adversarial training in multi-exit network.
Neural Netw. 2022 Jun;150:1-11. doi: 10.1016/j.neunet.2022.02.015. Epub 2022 Feb 25.
4. Universal adversarial attacks on deep neural networks for medical image classification.
BMC Med Imaging. 2021 Jan 7;21(1):9. doi: 10.1186/s12880-020-00530-y.
5. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9503-9520. doi: 10.1109/TPAMI.2021.3125931. Epub 2022 Nov 7.
6. A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.
IEEE Trans Cybern. 2023 Aug;53(8):5323-5335. doi: 10.1109/TCYB.2022.3209175. Epub 2023 Jul 18.
7. Adversarial attacks and adversarial robustness in computational pathology.
Nat Commun. 2022 Sep 29;13(1):5711. doi: 10.1038/s41467-022-33266-0.
8. Improving the robustness and accuracy of biomedical language models through adversarial training.
J Biomed Inform. 2022 Aug;132:104114. doi: 10.1016/j.jbi.2022.104114. Epub 2022 Jun 15.
9. SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.
10. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.