

Approaching Adversarial Example Classification with Chaos Theory.

Authors

Pedraza Anibal, Deniz Oscar, Bueno Gloria

Affiliations

VISILAB, University of Castilla La Mancha, 13001 Ciudad Real, Spain.

Publication

Entropy (Basel). 2020 Oct 24;22(11):1201. doi: 10.3390/e22111201.

DOI: 10.3390/e22111201
PMID: 33286969
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7712112/
Abstract

Adversarial examples are one of the most intriguing topics in modern deep learning. Imperceptible perturbations to the input can fool robust models. In relation to this problem, attack and defense methods are being developed almost on a daily basis. In parallel, efforts are being made simply to point out when an input image is an adversarial example. This can help prevent potential issues, as the failure cases are easily recognizable by humans. The proposal in this work is to study how chaos theory methods can help distinguish adversarial examples from regular images. Our work is based on the assumption that deep networks behave as chaotic systems, and adversarial examples are the main manifestation of it (in the sense that a slight input variation produces a totally different output). In our experiments, we show that the Lyapunov exponents (an established measure of chaoticity), which have recently been proposed for classification of adversarial examples, are not robust to image processing transformations that alter image entropy. Furthermore, we show that entropy can complement Lyapunov exponents in such a way that the discriminating power is significantly enhanced. The proposed method achieves 65% to 100% accuracy detecting adversarials under a wide range of attacks (for example: CW, PGD, Spatial, HopSkip) on the MNIST dataset, with similar results when entropy-changing image processing methods (such as Equalization, Speckle and Gaussian noise) are applied. This is also corroborated on two other datasets, Fashion-MNIST and CIFAR-10. These results indicate that classifiers can enhance their robustness against the adversarial phenomenon, and that the approach can be applied in a wide variety of conditions that potentially match real-world cases as well as other threatening scenarios.
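The entropy half of the detector described above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal, self-contained example of Shannon entropy computed over a grayscale pixel histogram, the kind of scalar feature that could be paired with Lyapunov-exponent estimates. The flat/noisy images and the noise level are made up for illustration.

```python
import math
import random

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of a grayscale pixel histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    # Sum -p * log2(p) over occupied histogram bins only.
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

random.seed(0)

# A constant image carries no information: its entropy is 0 bits.
flat = [128] * 1024

# Additive Gaussian noise (one of the entropy-changing transforms the
# abstract mentions) spreads the histogram and raises the entropy.
noisy = [min(255, max(0, 128 + int(random.gauss(0, 20)))) for _ in range(1024)]

print(round(shannon_entropy(flat), 3))   # zero for the flat image
print(round(shannon_entropy(noisy), 3))  # clearly larger for the noisy one
```

The point the paper makes is that transforms like this shift entropy enough to break a Lyapunov-only detector, which is why the two features are combined.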


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/e601658b3706/entropy-22-01201-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/2e5d181fbcd2/entropy-22-01201-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/b8d7b7d87273/entropy-22-01201-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/ff9ca2f93083/entropy-22-01201-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/2e462035aa14/entropy-22-01201-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/43770c82c248/entropy-22-01201-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/7a758d06045f/entropy-22-01201-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/7986d49723c4/entropy-22-01201-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/a8901af447e3/entropy-22-01201-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/ab2439aac1b0/entropy-22-01201-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/2a935b09b100/entropy-22-01201-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/eb103313d5d4/entropy-22-01201-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/caa8c39623af/entropy-22-01201-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b77f/7712112/48e1538da69d/entropy-22-01201-g014.jpg

Similar Articles

1. Approaching Adversarial Example Classification with Chaos Theory.
Entropy (Basel). 2020 Oct 24;22(11):1201. doi: 10.3390/e22111201.
2. Adversarial example defense based on image reconstruction.
PeerJ Comput Sci. 2021 Dec 24;7:e811. doi: 10.7717/peerj-cs.811. eCollection 2021.
3. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
Front Neurorobot. 2023 Feb 9;17:1129720. doi: 10.3389/fnbot.2023.1129720. eCollection 2023.
4. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
Neural Netw. 2024 Mar;171:127-143. doi: 10.1016/j.neunet.2023.11.056. Epub 2023 Nov 25.
5. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9503-9520. doi: 10.1109/TPAMI.2021.3125931. Epub 2022 Nov 7.
6. Uni-image: Universal image construction for robust neural model.
Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.
7. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Front Artif Intell. 2022 Jan 27;4:752831. doi: 10.3389/frai.2021.752831. eCollection 2021.
8. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
IEEE Trans Image Process. 2021;30:5769-5781. doi: 10.1109/TIP.2021.3082317.
9. Adversarial Attack and Defense in Deep Ranking.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5306-5324. doi: 10.1109/TPAMI.2024.3365699. Epub 2024 Jul 2.
10. Adversarial Examples: Attacks and Defenses for Deep Learning.
IEEE Trans Neural Netw Learn Syst. 2019 Sep;30(9):2805-2824. doi: 10.1109/TNNLS.2018.2886017. Epub 2019 Jan 14.

Cited By

1. Influence of Features on Accuracy of Anomaly Detection for an Energy Trading System.
Sensors (Basel). 2021 Jun 21;21(12):4237. doi: 10.3390/s21124237.

References

1. A simple method for detecting chaos in nature.
Commun Biol. 2020 Jan 3;3:11. doi: 10.1038/s42003-019-0715-9. eCollection 2020.
2. Liapunov exponents from time series.
Phys Rev A Gen Phys. 1986 Dec;34(6):4971-4979. doi: 10.1103/physreva.34.4971.