Geometry-Aware Generation of Adversarial Point Clouds.

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2984-2999. doi: 10.1109/TPAMI.2020.3044712. Epub 2022 May 5.

DOI: 10.1109/TPAMI.2020.3044712
PMID: 33320808
Abstract

Machine learning models have been shown to be vulnerable to adversarial examples. While most existing methods for adversarial attack and defense operate in the 2D image domain, a few recent attempts have been made to extend them to 3D point cloud data. However, the adversarial results obtained by these methods typically contain point outliers, which are both noticeable and easy to defend against using simple outlier-removal techniques. Motivated by the different mechanisms by which humans perceive 2D images and 3D shapes, in this paper we propose a new design of geometry-aware objectives, whose solutions favor (the discrete versions of) the desired surface properties of smoothness and fairness. To generate adversarial point clouds, we use a targeted misclassification attack loss that supports the continuous pursuit of increasingly malicious signals. Regularizing this targeted attack loss with our proposed geometry-aware objectives yields our proposed method, Geometry-Aware Adversarial Attack (GeoA3). The results of GeoA3 tend to be more harmful, arguably harder to defend against, and possess the key adversarial characteristic of being imperceptible to humans. While the main focus of this paper is learning to generate adversarial point clouds, we also present a simple but effective algorithm, termed GeoA3-IterNormPro, with Iterative Normal Projection (IterNormPro), which solves a new objective function, GeoA3+, towards surface-level adversarial attacks via the generation of adversarial point clouds. We quantitatively evaluate our methods on both synthetic and physical objects in terms of attack success rate and geometric regularity. For qualitative evaluation, we conduct subjective studies by collecting human preferences on Amazon Mechanical Turk. Comparative results in comprehensive experiments confirm the advantages of our proposed methods. Our source code is publicly available at https://github.com/Yuxin-Wen/GeoA3.
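The core recipe in the abstract, a targeted attack loss regularized by a geometry-aware smoothness objective, can be sketched numerically. Everything below is illustrative, not the paper's implementation: `knn_centroid_loss` is one simple discrete smoothness surrogate (penalizing points that drift from the centroid of their nearest neighbors, i.e. outliers), `toy_target_loss` is a hypothetical stand-in for the misclassification loss (a real attack would differentiate the victim network's target-class logits), and the finite-difference gradient replaces backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_centroid_loss(pts, k=8):
    # Discrete smoothness surrogate: mean squared distance of each point to
    # the centroid of its k nearest neighbors. Outlier points that drift off
    # the local surface raise this value sharply.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]   # column 0 is the point itself
    centroids = pts[idx].mean(axis=1)
    return float(np.mean(np.sum((pts - centroids) ** 2, axis=1)))

def toy_target_loss(pts, direction=np.array([1.0, 0.0, 0.0])):
    # Hypothetical stand-in for the targeted misclassification loss: it
    # rewards shifting the cloud toward `direction`. A real attack would use
    # the victim network's logits for the target class here.
    return float(-pts.mean(axis=0) @ direction)

def numerical_grad(f, pts, eps=1e-4):
    # Central-difference gradient; a real implementation would backprop.
    g = np.zeros_like(pts)
    for i in range(pts.shape[0]):
        for j in range(pts.shape[1]):
            hi, lo = pts.copy(), pts.copy()
            hi[i, j] += eps
            lo[i, j] -= eps
            g[i, j] = (f(hi) - f(lo)) / (2 * eps)
    return g

# Toy point cloud: 64 points sampled on the unit sphere.
x = rng.normal(size=(64, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

gamma = 5.0  # weight of the geometry regularizer
total = lambda p: toy_target_loss(p) + gamma * knn_centroid_loss(p)

# Gradient descent on the regularized attack objective: the attack signal
# grows while the geometry term keeps the perturbed cloud smooth.
adv = x.copy()
for _ in range(20):
    adv -= 0.05 * numerical_grad(total, adv)
```

Raising `gamma` trades attack strength for geometric regularity: with `gamma = 0` the optimizer is free to eject individual points as outliers, which is exactly the artifact the geometry-aware objective is designed to suppress.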


Similar Articles

1
Geometry-Aware Generation of Adversarial Point Clouds.
IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):2984-2999. doi: 10.1109/TPAMI.2020.3044712. Epub 2022 May 5.
2
Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4727-4746. doi: 10.1109/TPAMI.2022.3193449. Epub 2023 Mar 7.
3
Image Super-Resolution as a Defense Against Adversarial Attacks.
IEEE Trans Image Process. 2019 Sep 19. doi: 10.1109/TIP.2019.2940533.
4
LPF-Defense: 3D adversarial defense based on frequency analysis.
PLoS One. 2023 Feb 6;18(2):e0271388. doi: 10.1371/journal.pone.0271388. eCollection 2023.
5
Learning defense transformations for counterattacking adversarial examples.
Neural Netw. 2023 Jul;164:177-185. doi: 10.1016/j.neunet.2023.03.008. Epub 2023 Mar 24.
6
Vulnerability of classifiers to evolutionary generated adversarial examples.
Neural Netw. 2020 Jul;127:168-181. doi: 10.1016/j.neunet.2020.04.015. Epub 2020 Apr 20.
7
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1725-1737. doi: 10.1109/TPAMI.2020.3032061. Epub 2022 Mar 4.
8
Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
Neural Netw. 2023 Oct;167:730-740. doi: 10.1016/j.neunet.2023.08.048. Epub 2023 Sep 9.
9
Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
Med Image Anal. 2021 Oct;73:102141. doi: 10.1016/j.media.2021.102141. Epub 2021 Jun 18.
10
SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.
IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1051-1065. doi: 10.1109/TNNLS.2020.3039295. Epub 2022 Feb 28.

Cited By

1
A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects.
J Imaging. 2025 Jan 13;11(1):25. doi: 10.3390/jimaging11010025.
2
LPF-Defense: 3D adversarial defense based on frequency analysis.
PLoS One. 2023 Feb 6;18(2):e0271388. doi: 10.1371/journal.pone.0271388. eCollection 2023.