One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2021 Sep;43(9):2921-2935. doi: 10.1109/TPAMI.2020.3035351. Epub 2021 Aug 4.

DOI: 10.1109/TPAMI.2020.3035351
PMID: 33147140
Abstract

One-shot neural architecture search (NAS) has recently become mainstream in the NAS community because it significantly improves computational efficiency through weight sharing. However, the supernet training paradigm in one-shot NAS introduces catastrophic forgetting, where each training step can degrade the performance of other architectures whose weights are partially shared with the current architecture. To overcome this catastrophic forgetting, we formulate supernet training for one-shot NAS as a constrained continual learning optimization problem, such that learning the current architecture does not degrade the validation accuracy of previous architectures. The key to solving this constrained optimization problem is a novelty-search-based architecture selection (NSAS) loss function that regularizes supernet training by using a greedy novelty search method to find the most representative subset. We applied the NSAS loss function to two one-shot NAS baselines and extensively tested them on both a common search space and a NAS benchmark dataset. We further derive three variants of the NSAS loss function: NSAS with a depth constraint (NSAS-C) to improve transferability, and NSAS-G and NSAS-LG to handle situations with a limited number of constraints. The experiments on the common NAS search space demonstrate that NSAS and its variants improve the predictive ability of supernet training in one-shot NAS, with remarkable and efficient performance on the CIFAR-10, CIFAR-100, and ImageNet datasets. Results on the NAS benchmark dataset also confirm the significant improvements these one-shot NAS baselines can achieve.
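
To make the abstract's mechanism concrete, below is a minimal illustrative Python sketch of the greedy novelty-search subset selection that the NSAS loss function relies on. The binary op-choice encoding, the Hamming distance, and the k-nearest-neighbour novelty score are assumptions chosen for illustration, not the authors' released implementation.

# Hypothetical sketch: greedily select a diverse ("most representative")
# subset of architectures by maximising novelty, as described for NSAS.
# The encoding and distance metric here are illustrative assumptions.
import random

def novelty(candidate, selected, k=3):
    """Novelty = mean Hamming distance from `candidate` to its k nearest
    neighbours among the already-selected architectures."""
    if not selected:
        return float("inf")
    dists = sorted(sum(a != b for a, b in zip(candidate, s)) for s in selected)
    return sum(dists[:k]) / min(k, len(dists))

def greedy_novelty_subset(pool, subset_size):
    """Greedily grow the subset, always adding the architecture that is
    most novel (most distant) with respect to those already chosen."""
    selected = [pool[0]]
    remaining = list(pool[1:])
    while len(selected) < subset_size and remaining:
        best = max(remaining, key=lambda arch: novelty(arch, selected))
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    random.seed(0)
    # Architectures encoded as binary op-choice vectors (an assumption).
    pool = [tuple(random.randint(0, 1) for _ in range(14)) for _ in range(50)]
    for arch in greedy_novelty_subset(pool, subset_size=5):
        print(arch)

In the paper's formulation, a subset selected this way would then act as the constraint set during supernet training: the loss on the current architecture is regularized so that the validation performance of these representative architectures is preserved, which is what counters catastrophic forgetting.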

Similar Articles

1. One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting.
   IEEE Trans Pattern Anal Mach Intell. 2021 Sep;43(9):2921-2935. doi: 10.1109/TPAMI.2020.3035351. Epub 2021 Aug 4.
2. One-Shot Neural Architecture Search by Dynamically Pruning Supernet in Hierarchical Order.
   Int J Neural Syst. 2021 Jul;31(7):2150029. doi: 10.1142/S0129065721500295. Epub 2021 Jun 14.
3. MNGNAS: Distilling Adaptive Combination of Multiple Searched Networks for One-Shot Neural Architecture Search.
   IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13489-13508. doi: 10.1109/TPAMI.2023.3293885. Epub 2023 Oct 3.
4. Deeply Supervised Block-Wise Neural Architecture Search.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2451-2464. doi: 10.1109/TNNLS.2023.3347542. Epub 2025 Feb 6.
5. You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization.
   IEEE Trans Pattern Anal Mach Intell. 2021 Sep;43(9):2891-2904. doi: 10.1109/TPAMI.2020.3020300. Epub 2021 Aug 4.
6. Disturbance-immune weight sharing for neural architecture search.
   Neural Netw. 2021 Dec;144:553-564. doi: 10.1016/j.neunet.2021.09.002. Epub 2021 Sep 23.
7. Point-NAS: A Novel Neural Architecture Search Framework for Point Cloud Analysis.
   IEEE Trans Image Process. 2023;32:6526-6542. doi: 10.1109/TIP.2023.3331223. Epub 2023 Dec 1.
8. MIGO-NAS: Towards Fast and Generalizable Neural Architecture Search.
   IEEE Trans Pattern Anal Mach Intell. 2021 Sep;43(9):2936-2952. doi: 10.1109/TPAMI.2021.3065138. Epub 2021 Aug 4.
9. A Gradient-Guided Evolutionary Neural Architecture Search.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4345-4357. doi: 10.1109/TNNLS.2024.3371432. Epub 2025 Feb 28.
10. Towards a configurable and non-hierarchical search space for NAS.
    Neural Netw. 2024 Dec;180:106700. doi: 10.1016/j.neunet.2024.106700. Epub 2024 Sep 3.

Citing Articles

1. Heterogeneity-Aware Personalized Federated Neural Architecture Search.
   Entropy (Basel). 2025 Jul 16;27(7):759. doi: 10.3390/e27070759.