

LPF-Defense: 3D adversarial defense based on frequency analysis.

Affiliation

Department of Computer Engineering, Sharif University of Technology, Tehran, Iran.

Publication

PLoS One. 2023 Feb 6;18(2):e0271388. doi: 10.1371/journal.pone.0271388. eCollection 2023.

DOI: 10.1371/journal.pone.0271388
PMID: 36745627
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9901796/
Abstract

3D point clouds are increasingly used in various applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be fooled by 3D adversarial attacks intentionally designed to perturb some of a point cloud's features. These misclassifications may be due to the network's overreliance on features carrying unnecessary information in the training set. As such, identifying the features used by deep classifiers and removing features with unnecessary information from the training data can improve a network's robustness against adversarial attacks. In this paper, the LPF-Defense framework is proposed to discard this unnecessary information from the training data by suppressing its high-frequency content during the training phase. Our analysis shows that adversarial perturbations are concentrated in the high-frequency content of adversarial point clouds. Experiments show that the proposed defense achieves state-of-the-art performance against six adversarial attacks on the PointNet, PointNet++, and DGCNN models. The findings are supported by an extensive evaluation on synthetic (ModelNet40 and ShapeNet) and real (ScanObjectNN) datasets. In particular, classification accuracy improves by an average of 3.8% under the Drop100 attack and 4.26% under the Drop200 attack compared to state-of-the-art methods. The method also improves the models' accuracy on the original dataset compared to other available methods. (To facilitate research in this area, an open-source implementation of the method and data is released at https://github.com/kimianoorbakhsh/LPF-Defense.)

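The core idea of the defense described above, suppressing the high-frequency content of each training point cloud, can be sketched with a graph-spectral low-pass filter. Note this is only an illustrative stand-in: the paper filters in the spherical-harmonic domain, and the k-NN graph Laplacian, function name, and parameters (`k`, `keep_ratio`) below are assumptions, not the authors' implementation.

```python
import numpy as np

def low_pass_filter_points(points, k=10, keep_ratio=0.3):
    """Project a point cloud onto the low-frequency eigenvectors of its
    k-NN graph Laplacian, discarding high-frequency geometric detail."""
    n = len(points)
    # Pairwise squared distances, then each point's k nearest neighbours
    # (column 0 of the argsort is the point itself, so it is skipped).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    # Symmetric binary adjacency matrix of the k-NN graph.
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)
    # Combinatorial graph Laplacian L = D - W; its eigenvalues order the
    # graph-Fourier basis from smooth (low frequency) to oscillatory.
    L = np.diag(W.sum(1)) - W
    _, U = np.linalg.eigh(L)            # eigenvalues ascending
    m = max(1, int(keep_ratio * n))
    U_low = U[:, :m]                    # low-frequency basis vectors
    # Orthogonal projection of the xyz coordinates onto that subspace.
    return U_low @ (U_low.T @ points)

rng = np.random.default_rng(0)
clean = rng.normal(size=(128, 3))
noisy = clean + 0.05 * rng.normal(size=clean.shape)  # high-freq. perturbation
smoothed = low_pass_filter_points(noisy)
```

In the paper's training recipe, a transform of this kind is applied to the training set so the classifier never learns to rely on high-frequency structure, which is where the analysis locates adversarial perturbations.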

Figures (PMC full text):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/47dc5c3ac50a/pone.0271388.g001.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/ae76f4da4fa7/pone.0271388.g002.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/3dfcc590abc7/pone.0271388.g003.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/762afb0945c7/pone.0271388.g004.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/e7eea154958e/pone.0271388.g005.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/6f3c24dc82ec/pone.0271388.g006.jpg
Fig 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/09754e8ff0cf/pone.0271388.g007.jpg
Fig 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/835f65b08e9e/pone.0271388.g008.jpg
Fig 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/71877bebaf68/pone.0271388.g009.jpg

Similar articles

1. LPF-Defense: 3D adversarial defense based on frequency analysis.
PLoS One. 2023 Feb 6;18(2):e0271388. doi: 10.1371/journal.pone.0271388. eCollection 2023.
2. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
Front Artif Intell. 2022 Jan 27;4:752831. doi: 10.3389/frai.2021.752831. eCollection 2021.
3. Geometry-Aware Generation of Adversarial Point Clouds.
IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2984-2999. doi: 10.1109/TPAMI.2020.3044712. Epub 2022 May 5.
4. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
Neural Netw. 2024 Mar;171:127-143. doi: 10.1016/j.neunet.2023.11.056. Epub 2023 Nov 25.
5. Learning defense transformations for counterattacking adversarial examples.
Neural Netw. 2023 Jul;164:177-185. doi: 10.1016/j.neunet.2023.03.008. Epub 2023 Mar 24.
6. Self-Supervised Learning for Point-Cloud Classification by a Multigrid Autoencoder.
Sensors (Basel). 2022 Oct 23;22(21):8115. doi: 10.3390/s22218115.
7. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Med Image Anal. 2021 Apr;69:101977. doi: 10.1016/j.media.2021.101977. Epub 2021 Jan 22.
8. Image Super-Resolution as a Defense Against Adversarial Attacks.
IEEE Trans Image Process. 2019 Sep 19. doi: 10.1109/TIP.2019.2940533.
9. Adversarial Attack and Defense in Deep Ranking.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5306-5324. doi: 10.1109/TPAMI.2024.3365699. Epub 2024 Jul 2.
10. Uni-image: Universal image construction for robust neural model.
Neural Netw. 2020 Aug;128:279-287. doi: 10.1016/j.neunet.2020.05.018. Epub 2020 May 21.

References cited by this article

1. Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4727-4746. doi: 10.1109/TPAMI.2022.3193449. Epub 2023 Mar 7.
2. Geometry-Aware Generation of Adversarial Point Clouds.
IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2984-2999. doi: 10.1109/TPAMI.2020.3044712. Epub 2022 May 5.
3. Point Cloud Denoising via Feature Graph Laplacian Regularization.
IEEE Trans Image Process. 2020 Jan 30. doi: 10.1109/TIP.2020.2969052.
4. DGCNN: A convolutional neural network over large-scale labeled graphs.
Neural Netw. 2018 Dec;108:533-543. doi: 10.1016/j.neunet.2018.09.001. Epub 2018 Sep 21.
5. Deep learning for healthcare: review, opportunities and challenges.
Brief Bioinform. 2018 Nov 27;19(6):1236-1246. doi: 10.1093/bib/bbx044.