
Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks

Author Information

Sardar Nida, Khan Sundas, Hintze Arend, Mehra Priyanka

Affiliations

Department for MicroData Analytics, Dalarna University, 791 88 Falun, Sweden.

BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI 48824, USA.

Publication Information

Entropy (Basel). 2023 Jun 13;25(6):933. doi: 10.3390/e25060933.

Abstract

Deep learning models have achieved impressive performance on a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique for improving model generalization and robustness. In this study, we investigate the impact of dropout regularization on the ability of neural networks to withstand adversarial attacks, as well as on the degree of "functional smearing" between individual neurons in the network. Functional smearing in this context describes the phenomenon that a neuron or hidden state is involved in multiple functions at the same time. Our findings confirm that dropout regularization can enhance a network's resistance to adversarial attacks, but this effect is only observable within a specific range of dropout probabilities. Furthermore, our study reveals that dropout regularization significantly increases functional smearing across a wide range of dropout rates. However, it is those networks with lower levels of functional smearing that exhibit greater resilience against adversarial attacks. This suggests that, even though dropout improves robustness to fooling, one should instead try to decrease functional smearing.
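
To make the setup described in the abstract concrete, below is a minimal sketch (not the authors' code) of this kind of experiment: a small fully connected classifier trained with dropout regularization and probed with single-step FGSM adversarial perturbations. PyTorch is assumed as the framework; the architecture, the dropout probability p, and the attack strength eps are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutMLP(nn.Module):
    """Small MLP with dropout regularization on its hidden layer (illustrative sizes)."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.drop = nn.Dropout(p)   # dropout probability p is the quantity swept in such studies
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = F.relu(self.fc1(x))     # inputs assumed flattened to in_dim
        return self.fc2(self.drop(h))

def fgsm_attack(model, x, y, eps=0.1):
    """One-step FGSM: perturb x in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_accuracy(model, loader, eps=0.1):
    """Fraction of adversarially perturbed inputs that are still classified correctly."""
    model.eval()                    # dropout is active only during training
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

Sweeping p across a range of dropout probabilities and comparing adversarial_accuracy of the resulting networks mirrors the kind of comparison the abstract reports, where robustness gains appear only within a specific range of dropout rates.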


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e64/10297406/12f4360ba951/entropy-25-00933-g001.jpg
