

SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.

Affiliations

Department of Computer Science, University of California, Irvine, United States of America.

Department of Computer Science and Engineering, University of South Carolina, United States of America.

Publication Information

Neural Netw. 2021 Aug;140:1-12. doi: 10.1016/j.neunet.2021.02.023. Epub 2021 Mar 4.

DOI: 10.1016/j.neunet.2021.02.023
PMID: 33743319
Abstract

We introduce SPLASH units, a class of learnable activation functions shown to simultaneously improve the accuracy of deep neural networks while also improving their robustness to adversarial attacks. SPLASH units have both a simple parameterization and maintain the ability to approximate a wide range of non-linear functions. SPLASH units are: (1) continuous; (2) grounded (f(0)=0); (3) use symmetric hinges; and (4) their hinges are placed at fixed locations which are derived from the data (i.e. no learning required). Compared to nine other learned and fixed activation functions, including ReLU and its variants, SPLASH units show superior performance across three datasets (MNIST, CIFAR-10, and CIFAR-100) and four architectures (LeNet5, All-CNN, ResNet-20, and Network-in-Network). Furthermore, we show that SPLASH units significantly increase the robustness of deep neural networks to adversarial attacks. Our experiments on both black-box and white-box adversarial attacks show that commonly-used architectures, namely LeNet5, All-CNN, Network-in-Network, and ResNet-20, can be up to 31% more robust to adversarial attacks by simply using SPLASH units instead of ReLUs. Finally, we show the benefits of using SPLASH activation functions in bigger architectures designed for non-trivial datasets such as ImageNet.
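The properties listed in the abstract (continuous, grounded at f(0)=0, symmetric hinges at fixed data-derived locations, learnable slopes) describe a sum of shifted ReLU terms. A minimal NumPy sketch of such a piecewise-linear unit is shown below; the function name, hinge placement, and coefficient values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def splash(x, hinges, a_plus, a_minus):
    """Sketch of a SPLASH-style piecewise-linear activation.

    f(x) = sum_s a_plus[s] * max(0, x - b_s) + a_minus[s] * max(0, -x - b_s)

    The hinge locations b_s >= 0 are fixed (e.g. chosen from the data
    distribution, as the paper describes); only the slopes a_plus / a_minus
    would be learned. Every term vanishes at x = 0, so f(0) = 0 (grounded),
    and each hinge b_s appears symmetrically on both sides of the origin.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for b, ap, am in zip(hinges, a_plus, a_minus):
        out += ap * np.maximum(0.0, x - b) + am * np.maximum(0.0, -x - b)
    return out

# With a single hinge at 0, slopes (1, 0) recover plain ReLU:
print(splash([-2.0, 0.0, 3.0], hinges=[0.0], a_plus=[1.0], a_minus=[0.0]))
```

In a network, `a_plus` and `a_minus` would be trainable parameters (one set per layer or per unit), while `hinges` stays fixed, which keeps the parameterization simple as the abstract notes.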


Similar Articles

1. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.
   Neural Netw. 2021 Aug;140:1-12. doi: 10.1016/j.neunet.2021.02.023. Epub 2021 Mar 4.
2. Parametric Deformable Exponential Linear Units for deep neural networks.
   Neural Netw. 2020 May;125:281-289. doi: 10.1016/j.neunet.2020.02.012. Epub 2020 Feb 26.
3. Vulnerability of classifiers to evolutionary generated adversarial examples.
   Neural Netw. 2020 Jul;127:168-181. doi: 10.1016/j.neunet.2020.04.015. Epub 2020 Apr 20.
4. Interpolated Adversarial Training: Achieving robust neural networks without sacrificing too much accuracy.
   Neural Netw. 2022 Oct;154:218-233. doi: 10.1016/j.neunet.2022.07.012. Epub 2022 Jul 16.
5. Uni-image: Universal image construction for robust neural model.
   Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.
6. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
   Neural Comput. 2019 Aug;31(8):1624-1670. doi: 10.1162/neco_a_01209. Epub 2019 Jul 1.
7. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
   Sensors (Basel). 2023 Mar 20;23(6):3252. doi: 10.3390/s23063252.
8. Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks.
   Neural Netw. 2021 Jan;133:148-156. doi: 10.1016/j.neunet.2020.10.016. Epub 2020 Nov 6.
9. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
   IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9503-9520. doi: 10.1109/TPAMI.2021.3125931. Epub 2022 Nov 7.
10. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
   IEEE Trans Image Process. 2021;30:5769-5781. doi: 10.1109/TIP.2021.3082317.

Cited By

1. Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets.
   Sensors (Basel). 2022 Aug 16;22(16):6129. doi: 10.3390/s22166129.