

Adversarially Robust Learning via Entropic Regularization

Authors

Jagatap Gauri, Joshi Ameya, Chowdhury Animesh Basak, Garg Siddharth, Hegde Chinmay

Affiliation

Electrical and Computer Engineering, New York University, New York, NY, United States.

Publication

Front Artif Intell. 2022 Jan 4;4:780843. doi: 10.3389/frai.2021.780843. eCollection 2021.

DOI: 10.3389/frai.2021.780843
PMID: 35059637
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8764444/
Abstract

In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function that is equipped with an additional entropic regularization. Our loss function considers the contribution of adversarial samples that are drawn from a specially designed distribution in the data space that assigns high probability to points with high loss and in the immediate neighborhood of training samples. Our proposed algorithms optimize this loss to seek adversarially robust valleys of the loss landscape. Our approach achieves competitive (or better) performance in terms of robust classification accuracy as compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
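The abstract describes a loss in which adversarial samples are drawn from a distribution that assigns high probability to high-loss points near each training example. A minimal one-dimensional sketch of that idea is below. This is not the authors' ATENT implementation; the uniform sampling in an epsilon-ball and the Gibbs weighting `exp(loss / temperature)` are simplifying assumptions used only to illustrate how high-loss neighbors come to dominate the regularized loss.

```python
import math
import random

def entropic_adversarial_loss(loss_fn, x, epsilon, temperature=1.0,
                              n_samples=8, rng=None):
    """Entropy-weighted adversarial loss at a scalar point x.

    Draws perturbations within an epsilon-ball around x and averages the
    per-sample losses under Gibbs weights exp(loss / temperature), so
    high-loss neighbors dominate: a soft version of the worst case.
    """
    rng = rng or random.Random(0)
    deltas = [rng.uniform(-epsilon, epsilon) for _ in range(n_samples)]
    losses = [loss_fn(x + d) for d in deltas]
    m = max(losses)  # subtract the max inside exp for numerical stability
    weights = [math.exp((l - m) / temperature) for l in losses]
    z = sum(weights)
    return sum(w * l for w, l in zip(weights, losses)) / z
```

As the temperature goes to zero the weighting concentrates on the worst sampled perturbation (approaching standard adversarial training); as it grows, the loss approaches a plain average over the neighborhood, which is the smoothing the entropic term provides.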


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4273/8764444/0c4cf1d02db8/frai-04-780843-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4273/8764444/82b1168a15a4/frai-04-780843-g002.jpg

Similar Articles

1. Adversarially Robust Learning via Entropic Regularization.
Front Artif Intell. 2022 Jan 4;4:780843. doi: 10.3389/frai.2021.780843. eCollection 2021.
2. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
Sensors (Basel). 2023 Mar 20;23(6):3252. doi: 10.3390/s23063252.
3. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning.
IEEE Trans Pattern Anal Mach Intell. 2019 Aug;41(8):1979-1993. doi: 10.1109/TPAMI.2018.2858821. Epub 2018 Jul 23.
4. LRNAS: Differentiable Searching for Adversarially Robust Lightweight Neural Architecture.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5629-5643. doi: 10.1109/TNNLS.2024.3382724. Epub 2025 Feb 28.
5. Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.
Multimed Tools Appl. 2022;81(8):11479-11500. doi: 10.1007/s11042-022-12132-7. Epub 2022 Feb 18.
6. Adversarially robust neural networks with feature uncertainty learning and label embedding.
Neural Netw. 2024 Apr;172:106087. doi: 10.1016/j.neunet.2023.12.041. Epub 2023 Dec 26.
7. GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):2645-2651. doi: 10.1109/TPAMI.2022.3169217. Epub 2023 Jan 6.
8. Sinkhorn Adversarial Attack and Defense.
IEEE Trans Image Process. 2022;31:4039-4049. doi: 10.1109/TIP.2022.3180207. Epub 2022 Jun 14.
9. Generalizable and Discriminative Representations for Adversarially Robust Few-Shot Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5480-5493. doi: 10.1109/TNNLS.2024.3379172. Epub 2025 Feb 28.
10. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
Sensors (Basel). 2023 Jul 5;23(13):6173. doi: 10.3390/s23136173.
