
Unambiguous and High-Fidelity Backdoor Watermarking for Deep Neural Networks.

Authors

Hua Guang, Teoh Andrew Beng Jin, Xiang Yong, Jiang Hao

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11204-11217. doi: 10.1109/TNNLS.2023.3250210. Epub 2024 Aug 5.

DOI: 10.1109/TNNLS.2023.3250210
PMID: 37028031
Abstract

The unprecedented success of deep learning could not be achieved without the synergy of big data, computing power, and human knowledge, among which none is free. This calls for the copyright protection of deep neural networks (DNNs), which has been tackled via DNN watermarking. Due to the special structure of DNNs, backdoor watermarks have been one of the popular solutions. In this article, we first present a big picture of DNN watermarking scenarios with rigorous definitions unifying the black- and white-box concepts across watermark embedding, attack, and verification phases. Then, from the perspective of data diversity, especially adversarial and open set examples overlooked in the existing works, we rigorously reveal the vulnerability of backdoor watermarks against black-box ambiguity attacks. To solve this problem, we propose an unambiguous backdoor watermarking scheme via the design of deterministically dependent trigger samples and labels, showing that the cost of ambiguity attacks will increase from the existing linear complexity to exponential complexity. Furthermore, noting that the existing definition of backdoor fidelity is solely concerned with classification accuracy, we propose to more rigorously evaluate fidelity via examining training data feature distributions and decision boundaries before and after backdoor embedding. Incorporating the proposed prototype guided regularizer (PGR) and fine-tune all layers (FTAL) strategy, we show that backdoor fidelity can be substantially improved. Experimental results using two versions of the basic ResNet18, advanced wide residual network (WRN28_10) and EfficientNet-B0, on MNIST, CIFAR-10, CIFAR-100, and FOOD-101 classification tasks, respectively, illustrate the advantages of the proposed method.
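The abstract's key idea against ambiguity attacks is that trigger samples and their watermark labels are deterministically dependent, so an attacker cannot freely pair forged triggers with labels. The paper's actual construction is not reproduced here; the following is a minimal sketch of the general idea, assuming (hypothetically) that the label is derived from a cryptographic hash of the trigger sample's bytes:

```python
import hashlib
import os

def trigger_label(sample_bytes: bytes, num_classes: int) -> int:
    """Derive the watermark label deterministically from the trigger
    sample itself, so a (sample, label) pair cannot be forged
    independently of the sample's content."""
    digest = hashlib.sha256(sample_bytes).digest()
    return int.from_bytes(digest[:4], "big") % num_classes

# A few random "trigger images" as raw byte buffers (28x28 grayscale).
triggers = [os.urandom(28 * 28) for _ in range(5)]
labels = [trigger_label(t, num_classes=10) for t in triggers]

# Verification: recompute each label from its sample; any mismatch
# exposes a forged pair.
assert labels == [trigger_label(t, 10) for t in triggers]
```

Under such a dependence, an adversary who wants n forged trigger samples to all hash to chosen labels must search for them one preimage at a time, which is what pushes the ambiguity-attack cost from linear toward exponential in the watermark length.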


Similar Articles

1. Unambiguous and High-Fidelity Backdoor Watermarking for Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11204-11217. doi: 10.1109/TNNLS.2023.3250210. Epub 2024 Aug 5.
2. SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning.
Neural Netw. 2024 Jun;174:106199. doi: 10.1016/j.neunet.2024.106199. Epub 2024 Feb 21.
3. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1177-1191. doi: 10.1109/TNNLS.2020.3041202. Epub 2022 Feb 28.
4. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
Med Image Anal. 2023 Dec;90:102965. doi: 10.1016/j.media.2023.102965. Epub 2023 Sep 22.
5. A Textual Backdoor Defense Method Based on Deep Feature Classification.
Entropy (Basel). 2023 Jan 23;25(2):220. doi: 10.3390/e25020220.
6. Critical Path-Based Backdoor Detection for Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):4032-4046. doi: 10.1109/TNNLS.2022.3201586. Epub 2024 Feb 29.
7. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7589-7605. doi: 10.1109/TPAMI.2024.3392760. Epub 2024 Nov 6.
8. WFB: watermarking-based copyright protection framework for federated learning model via blockchain.
Sci Rep. 2024 Aug 21;14(1):19453. doi: 10.1038/s41598-024-70025-1.
9. Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface.
Sensors (Basel). 2023 May 14;23(10):4742. doi: 10.3390/s23104742.
10. Backdoor Attack against Face Sketch Synthesis.
Entropy (Basel). 2023 Jun 25;25(7):974. doi: 10.3390/e25070974.