

Multidomain active defense: Detecting multidomain backdoor poisoned samples via ALL-to-ALL decoupling training without clean datasets.

Affiliations

School of Computer Science, South-Central Minzu University, Wuhan 430074, China.

Publication info

Neural Netw. 2023 Nov;168:350-362. doi: 10.1016/j.neunet.2023.09.036. Epub 2023 Sep 25.

DOI: 10.1016/j.neunet.2023.09.036
PMID: 37797397
Abstract

Deep learning is vulnerable to backdoor poisoning attacks in which an attacker can easily embed a hidden backdoor into a trained model by injecting poisoned samples into the training set. Many prior state-of-the-art techniques for detecting backdoor poisoning attacks are based on a potential separability assumption. However, current adaptive poisoning strategies can significantly reduce 'distinguishable behavior', making most prior state-of-the-art techniques less effective. In addition, we note that existing detection methods are not practical for multidomain datasets and may leak user privacy because they require and collect clean samples. To address the above issues, we propose a multidomain active defense approach that does not use clean datasets. The proposed approach can generate diverse clean samples from different domains and decouple neural networks round by round using clean samples to disassociate features and labels, making backdoor poisoned samples easier to detect without fitting clean samples. We demonstrate the advantage of our approach through an extensive evaluation on CIFAR10, CelebA, MNIST & MNIST-M, MNIST & USPS & MNIST-M, MNIST & USPS & SVHN and CIFAR10 & Tiny-ImageNet.
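As a concrete illustration of the threat model the abstract opens with, a minimal BadNets-style poisoning step can be sketched as below. This is not the paper's method; the trigger pattern, patch size, target class, and poisoning rate are all hypothetical choices for illustration.

```python
import numpy as np

def poison_sample(image, target_label, patch_size=3):
    """Stamp a small white trigger patch into the bottom-right corner
    and relabel the sample to the attacker's target class."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = 1.0  # trigger: white square
    return poisoned, target_label

# Poison a small fraction of a toy training set (rate is illustrative).
rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))        # stand-in for MNIST-like inputs
labels = rng.integers(0, 10, size=100)
poison_rate, target = 0.1, 7
idx = rng.choice(100, size=int(100 * poison_rate), replace=False)
for i in idx:
    images[i], labels[i] = poison_sample(images[i], target)
```

A model trained on such a set learns the normal task on clean samples but associates the trigger patch with the target class, which is the hidden backdoor the detection methods discussed here aim to expose.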


Similar articles

1. Multidomain active defense: Detecting multidomain backdoor poisoned samples via ALL-to-ALL decoupling training without clean datasets.
Neural Netw. 2023 Nov;168:350-362. doi: 10.1016/j.neunet.2023.09.036. Epub 2023 Sep 25.

2. SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning.
Neural Netw. 2024 Jun;174:106199. doi: 10.1016/j.neunet.2024.106199. Epub 2024 Feb 21.

3. Poison Ink: Robust and Invisible Backdoor Attack.
IEEE Trans Image Process. 2022;31:5691-5705. doi: 10.1109/TIP.2022.3201472. Epub 2022 Sep 2.

4. Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
Adv Neural Inf Process Syst. 2022 Dec;35:36026-36039.

5. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1177-1191. doi: 10.1109/TNNLS.2020.3041202. Epub 2022 Feb 28.

6. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.

7. IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions.
Sensors (Basel). 2022 Nov 10;22(22):8697. doi: 10.3390/s22228697.

8. A Textual Backdoor Defense Method Based on Deep Feature Classification.
Entropy (Basel). 2023 Jan 23;25(2):220. doi: 10.3390/e25020220.

9. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
Sensors (Basel). 2023 Jul 5;23(13):6173. doi: 10.3390/s23136173.

10. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7589-7605. doi: 10.1109/TPAMI.2024.3392760. Epub 2024 Nov 6.