Feng Yuqi, Lv Zeqiong, Chen Hongyang, Gao Shangce, An Fengping, Sun Yanan
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5629-5643. doi: 10.1109/TNNLS.2024.3382724. Epub 2025 Feb 28.
Adversarial robustness is critical to deep neural networks (DNNs) in deployment. However, improving adversarial robustness often requires compromising on network size. Existing approaches to this problem mainly combine model compression with adversarial training, but their performance heavily relies on the neural architecture, which is typically designed manually with extensive expertise. In this article, we propose a lightweight and robust neural architecture search (LRNAS) method to automatically search for adversarially robust lightweight neural architectures. Specifically, we propose a novel search strategy that quantifies the contributions of the components in the search space, from which the beneficial components can be determined. In addition, we propose an architecture selection method based on a greedy strategy, which keeps the model size small while retaining sufficient beneficial components. Owing to these designs, LRNAS collectively guarantees the lightness, natural accuracy, and adversarial robustness of the searched architectures. We conduct extensive experiments on various benchmark datasets against state-of-the-art methods. The experimental results demonstrate that LRNAS is superior at finding lightweight neural architectures that are both accurate and adversarially robust under popular adversarial attacks. Moreover, ablation studies reveal the validity of the individual components designed in LRNAS and their positive effects on the overall performance.
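The abstract describes a greedy architecture selection that retains beneficial components while keeping the model small. The paper does not give the algorithm's details here, but the general idea can be sketched as a budgeted greedy pick: rank candidate components by their estimated contribution per unit of size, and add them until a parameter budget is exhausted. All component names, contribution scores, and parameter counts below are hypothetical, for illustration only.

```python
# Minimal sketch of a greedy, size-aware component selection step.
# Scores, costs, and names are illustrative assumptions, not from the paper.

def greedy_select(components, budget):
    """Pick components in descending contribution-per-parameter order
    until the parameter budget is exhausted."""
    # max(..., 1) guards against zero-cost components (e.g., skip connections)
    ranked = sorted(components,
                    key=lambda c: c["score"] / max(c["params"], 1),
                    reverse=True)
    chosen, used = [], 0
    for comp in ranked:
        if used + comp["params"] <= budget:
            chosen.append(comp["name"])
            used += comp["params"]
    return chosen, used

# Hypothetical candidate operations with estimated contributions and sizes.
candidates = [
    {"name": "sep_conv_3x3", "score": 0.42, "params": 1200},
    {"name": "dil_conv_5x5", "score": 0.35, "params": 2000},
    {"name": "skip_connect", "score": 0.10, "params": 0},
    {"name": "conv_7x7",     "score": 0.50, "params": 6000},
]

chosen, used = greedy_select(candidates, budget=4000)
```

Under these toy numbers, the cheap-but-useful components are kept and the large `conv_7x7` is dropped, illustrating how a greedy rule can trade a single high-contribution component for several smaller ones within the same budget.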