Similar Articles

1. A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning.
   IEEE Trans Emerg Top Comput Intell. 2020 Aug;4(4):450-467. doi: 10.1109/tetci.2020.2968933. Epub 2020 May 25.
2. Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS).
   Sensors (Basel). 2023 Jun 9;23(12):5459. doi: 10.3390/s23125459.
3. Adversarial attacks against supervised machine learning based network intrusion detection systems.
   PLoS One. 2022 Oct 14;17(10):e0275971. doi: 10.1371/journal.pone.0275971. eCollection 2022.
4. Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition.
   Sensors (Basel). 2021 Jun 7;21(11):3922. doi: 10.3390/s21113922.
5. Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach.
   Sensors (Basel). 2023 Jul 11;23(14):6287. doi: 10.3390/s23146287.
6. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
   Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.
7. Robustifying models against adversarial attacks by Langevin dynamics.
   Neural Netw. 2021 May;137:1-17. doi: 10.1016/j.neunet.2020.12.024. Epub 2021 Jan 9.
8. Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics.
   IEEE Trans Vis Comput Graph. 2020 Jan;26(1):1075-1085. doi: 10.1109/TVCG.2019.2934631. Epub 2019 Aug 26.
9. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
   Entropy (Basel). 2021 Oct 18;23(10):1359. doi: 10.3390/e23101359.
10. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Neural Netw. 2020 Jul;127:168-181. doi: 10.1016/j.neunet.2020.04.015. Epub 2020 Apr 20.

Articles Citing This Work

1. Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.
   Multimed Tools Appl. 2022;81(8):11479-11500. doi: 10.1007/s11042-022-12132-7. Epub 2022 Feb 18.
2. Preprocessing Pipelines including Block-Matching Convolutional Neural Network for Image Denoising to Robustify Deep Reidentification against Evasion Attacks.
   Entropy (Basel). 2021 Oct 3;23(10):1304. doi: 10.3390/e23101304.

References Cited in This Article

1. Adversarial attacks on medical machine learning.
   Science. 2019 Mar 22;363(6433):1287-1289. doi: 10.1126/science.aaw4399.
2. Adversarial Examples: Attacks and Defenses for Deep Learning.
   IEEE Trans Neural Netw Learn Syst. 2019 Sep;30(9):2805-2824. doi: 10.1109/TNNLS.2018.2886017. Epub 2019 Jan 14.
3. Randomized Prediction Games for Adversarial Machine Learning.
   IEEE Trans Neural Netw Learn Syst. 2017 Nov;28(11):2466-2478. doi: 10.1109/TNNLS.2016.2593488.
4. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.
   Proc USENIX Secur Symp. 2014 Aug;2014:17-32.
5. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.
   IEEE Trans Med Imaging. 2016 May;35(5):1285-98. doi: 10.1109/TMI.2016.2528162. Epub 2016 Feb 11.
6. Adversarial Feature Selection Against Evasion Attacks.
   IEEE Trans Cybern. 2016 Mar;46(3):766-77. doi: 10.1109/TCYB.2015.2415032. Epub 2015 Apr 21.
7. Human-level control through deep reinforcement learning.
   Nature. 2015 Feb 26;518(7540):529-33. doi: 10.1038/nature14236.
8. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition.
   Neural Netw. 2012 Aug;32:323-32. doi: 10.1016/j.neunet.2012.02.016. Epub 2012 Feb 20.
9. Improving generalization performance using double backpropagation.
   IEEE Trans Neural Netw. 1992;3(6):991-7. doi: 10.1109/72.165600.

A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning

Authors

Sadeghi Koosha, Banerjee Ayan, Gupta Sandeep K S

Affiliation

IMPACT lab (http://impact.asu.edu/), CIDSE, Arizona State University, Tempe, Arizona, USA, 85281.

Publication Information

IEEE Trans Emerg Top Comput Intell. 2020 Aug;4(4):450-467. doi: 10.1109/tetci.2020.2968933. Epub 2020 May 25.

DOI: 10.1109/tetci.2020.2968933
PMID: 33748635
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7971418/
Abstract

Machine Learning (ML) algorithms, specifically supervised learning, are widely used in modern real-world applications, which utilize Computational Intelligence (CI) as their core technology, such as autonomous vehicles, assistive robots, and biometric systems. Attacks that cause misclassifications or mispredictions can lead to erroneous decisions resulting in unreliable operations. Designing robust ML with the ability to provide reliable results in the presence of such attacks has become a top priority in the field of adversarial machine learning. An essential characteristic for rapid development of robust ML is an arms race between attack and defense strategists. However, an important prerequisite for the arms race is access to a well-defined system model so that experiments can be repeated by independent researchers. This paper proposes a fine-grained system-driven taxonomy to specify ML applications and adversarial system models in an unambiguous manner such that independent researchers can replicate experiments and escalate the arms race to develop more evolved and robust ML applications. The paper provides taxonomies for: 1) the dataset, 2) the ML architecture, 3) the adversary's knowledge, capability, and goal, 4) the adversary's strategy, and 5) the defense response. In addition, the relationships among these models and taxonomies are analyzed by proposing an adversarial machine learning cycle. The provided models and taxonomies are merged to form a comprehensive system-driven taxonomy, which represents the arms race between the ML applications and adversaries in recent years. The taxonomies encode best practices in the field, help evaluate and compare the contributions of research works, and reveal gaps in the field.
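The five taxonomy axes the abstract enumerates (dataset, ML architecture, adversary model, attack strategy, defense response) can be sketched as a simple data structure for specifying an experiment unambiguously. This is a minimal illustrative sketch; the field names and example values below are assumptions for illustration, not the paper's exact terminology.

```python
from dataclasses import dataclass

# Illustrative sketch of the five system-model axes: dataset,
# ML architecture, adversary model, attack strategy, defense response.

@dataclass
class AdversarySpec:
    knowledge: str   # e.g. "white-box" vs. "black-box"
    capability: str  # e.g. "evasion" (test-time) vs. "poisoning" (train-time)
    goal: str        # e.g. "untargeted misclassification"

@dataclass
class SystemModel:
    dataset: str
    ml_architecture: str
    adversary: AdversarySpec
    attack_strategy: str
    defense_response: str

    def describe(self) -> str:
        """One-line summary of an attack/defense experiment setup."""
        return (f"{self.ml_architecture} on {self.dataset}, "
                f"{self.adversary.knowledge} {self.adversary.capability} "
                f"attack via {self.attack_strategy}, "
                f"defended by {self.defense_response}")

# A hypothetical experiment specification:
spec = SystemModel(
    dataset="MNIST",
    ml_architecture="CNN classifier",
    adversary=AdversarySpec("white-box", "evasion",
                            "untargeted misclassification"),
    attack_strategy="gradient-based perturbation",
    defense_response="adversarial training",
)
print(spec.describe())
```

Pinning down each axis this way is what lets an independent researcher reproduce the same experiment, which is the replication prerequisite the abstract identifies for the attack/defense arms race.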
