LPF-Defense: 3D adversarial defense based on frequency analysis.

Affiliations

Department of Computer Engineering, Sharif University of Technology, Tehran, Iran.

Publication information

PLoS One. 2023 Feb 6;18(2):e0271388. doi: 10.1371/journal.pone.0271388. eCollection 2023.

Abstract

3D point clouds are increasingly used in a variety of applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be fooled into misclassification by 3D adversarial attacks intentionally designed to perturb some of a point cloud's features. These misclassifications may be due to the network's overreliance on features carrying unnecessary information in the training set. Accordingly, identifying the features used by deep classifiers and removing features with unnecessary information from the training data can improve the network's robustness against adversarial attacks. In this paper, the LPF-Defense framework is proposed to discard this unnecessary information from the training data by suppressing the high-frequency content of the input point clouds during the training phase. Our analysis shows that adversarial perturbations reside in the high-frequency content of adversarial point clouds. Experiments show that the proposed defense achieves state-of-the-art performance against six adversarial attacks on the PointNet, PointNet++, and DGCNN models. The findings are supported by an extensive evaluation on synthetic (ModelNet40 and ShapeNet) and real (ScanObjectNN) datasets. In particular, classification accuracy improves by an average of 3.8% under the Drop100 attack and 4.26% under the Drop200 attack compared to state-of-the-art methods. The method also improves the models' accuracy on the original, unattacked datasets compared to other available methods. (To facilitate research in this area, an open-source implementation of the method and data is released at https://github.com/kimianoorbakhsh/LPF-Defense.)
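The key operation the abstract describes, low-pass filtering a point cloud before training, can be sketched with plain numpy. This is a hypothetical illustration, not the authors' released implementation: each point's radius is projected onto a low-degree real spherical-harmonic basis (degrees 0-2, written out explicitly), so higher-frequency radial detail of the kind the paper associates with adversarial perturbations is discarded.

```python
import numpy as np

def low_pass_filter_points(points, eps=1e-12):
    """Sketch of low-pass filtering a 3D point cloud.

    Each point's radius is least-squares fitted by an (unnormalized) real
    spherical-harmonic basis up to degree 2 evaluated at the point's
    direction; the point is then rebuilt with the smoothed radius.
    Assumes points are roughly centered and nonzero.
    """
    points = np.asarray(points, dtype=float)
    r = np.linalg.norm(points, axis=1)
    u = points / np.maximum(r, eps)[:, None]   # unit directions on the sphere
    x, y, z = u.T
    # real spherical harmonics, degrees 0-2, up to constant factors
    B = np.stack([np.ones_like(x),             # l = 0
                  x, y, z,                     # l = 1
                  x * y, y * z, x * z,         # l = 2
                  x * x - y * y,               # l = 2
                  3.0 * z * z - 1.0],          # l = 2
                 axis=1)
    # least-squares projection of the radius field onto the low-frequency basis
    coeffs, *_ = np.linalg.lstsq(B, r, rcond=None)
    r_smooth = B @ coeffs
    return r_smooth[:, None] * u

# demo: a unit sphere with a high-frequency radial ripple
rng = np.random.default_rng(0)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
noisy = dirs * (1.0 + 0.1 * np.sin(20.0 * dirs[:, 2]))[:, None]
smooth = low_pass_filter_points(noisy)
```

In this sketch the filtered radii vary far less than the noisy ones, since the rapid `sin(20z)` ripple cannot be represented by degree-2 harmonics; the paper's actual pipeline uses a proper spherical harmonic transform with a tunable frequency cutoff.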

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a743/9901796/47dc5c3ac50a/pone.0271388.g001.jpg
