

K-Anonymity inspired adversarial attack and multiple one-class classification defense.

Affiliations

Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece.

Publication Information

Neural Netw. 2020 Apr;124:296-307. doi: 10.1016/j.neunet.2020.01.015. Epub 2020 Feb 6.

DOI: 10.1016/j.neunet.2020.01.015
PMID: 32036227
Abstract

A novel adversarial attack methodology for fooling deep neural network classifiers in image classification tasks is proposed, along with a novel defense mechanism to counter such attacks. Two concepts are introduced, namely the K-Anonymity-inspired Adversarial Attack (K-A) and the Multiple Support Vector Data Description Defense (M-SVDD-D). The proposed K-A introduces novel optimization criteria to standard adversarial attack methodologies, inspired by the K-Anonymity principles. Its generated adversarial examples are not only misclassified by the neural network classifier, but are uniformly spread along K different ranked output positions. The proposed M-SVDD-D consists of a deep neural architecture layer consisting of multiple non-linear one-class classifiers based on Support Vector Data Description that can be used to replace the final linear classification layer of a deep neural architecture, and an additional class verification mechanism. Its application decreases the effectiveness of adversarial attacks, by increasing the noise energy required to deceive the protected model, attributed to the introduced non-linearity. In addition, M-SVDD-D can be used to prevent adversarial attacks in black-box attack settings.
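The M-SVDD-D decision rule summarized in the abstract, one data description per class plus a class verification step in place of a single linear classifier, can be sketched as a toy example. This is a minimal illustration under strong assumptions (hand-picked 2-D features, fixed hypersphere centers and radii, plain Euclidean distance), not the authors' implementation, in which the per-class descriptions are learned by Support Vector Data Description inside a deep network.

```python
import math

def msvdd_predict(feature, centers, radii):
    """Toy M-SVDD-D-style decision rule: model each class as a hypersphere
    (center, radius) in feature space; accept the nearest class only if the
    feature lies inside its sphere, otherwise reject the input as a
    suspected adversarial example."""
    dists = [math.dist(feature, c) for c in centers]
    best = min(range(len(centers)), key=lambda i: dists[i])
    if dists[best] <= radii[best]:
        return best   # class verified by its one-class description
    return None       # rejected: feature lies outside every class sphere

# Hypothetical 2-D feature space with two classes.
centers = [(0.0, 0.0), (4.0, 4.0)]
radii = [1.0, 1.0]
print(msvdd_predict((0.2, -0.1), centers, radii))  # inside class-0 sphere -> 0
print(msvdd_predict((2.0, 2.0), centers, radii))   # outside both -> None
```

The rejection branch is what raises the cost of an attack: a perturbation that merely flips the nearest center still fails unless it also carries the feature inside the target sphere, which requires more noise energy than crossing a linear decision boundary.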


Similar Articles

1. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
Neural Netw. 2020 Apr;124:296-307. doi: 10.1016/j.neunet.2020.01.015. Epub 2020 Feb 6.
2. Uni-image: Universal image construction for robust neural model.
Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.
3. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Neural Comput. 2019 Aug;31(8):1624-1670. doi: 10.1162/neco_a_01209. Epub 2019 Jul 1.
4. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
Entropy (Basel). 2022 Mar 15;24(3):412. doi: 10.3390/e24030412.
5. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
Neural Netw. 2020 Aug;128:61-72. doi: 10.1016/j.neunet.2020.04.030. Epub 2020 Apr 30.
6. Vulnerability of classifiers to evolutionary generated adversarial examples.
Neural Netw. 2020 Jul;127:168-181. doi: 10.1016/j.neunet.2020.04.015. Epub 2020 Apr 20.
7. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.
8. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9503-9520. doi: 10.1109/TPAMI.2021.3125931. Epub 2022 Nov 7.
9. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.
10. Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach.
Sensors (Basel). 2023 Jul 11;23(14):6287. doi: 10.3390/s23146287.

Cited By

1. Opportunities and challenges of artificial intelligence in the medical field: current application, emerging problems, and problem-solving strategies.
J Int Med Res. 2021 Mar;49(3):3000605211000157. doi: 10.1177/03000605211000157.