

BlockQNN: Efficient Block-Wise Neural Network Architecture Generation.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2021 Jul;43(7):2314-2328. doi: 10.1109/TPAMI.2020.2969193. Epub 2021 Jun 8.

DOI: 10.1109/TPAMI.2020.2969193
PMID: 31985407
Abstract

Convolutional neural networks have gained a remarkable success in computer vision. However, most popular network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained to choose component layers sequentially. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it yields state-of-the-art results in comparison to the hand-crafted networks on image classification, particularly, the best network generated by BlockQNN achieves 2.35 percent top-1 error rate on CIFAR-10. (2) it offers tremendous reduction of the search space in designing networks, spending only 3 days with 32 GPUs. A faster version can yield a comparable result with only 1 GPU in 20 hours. (3) it has strong generalizability in that the network built on CIFAR also performs well on the larger-scale dataset. The best network achieves very competitive accuracy of 82.0 percent top-1 and 96.0 percent top-5 on ImageNet.
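The abstract's core idea — an agent that picks component layers one at a time with epsilon-greedy Q-learning, where the finished block's validation accuracy serves as the reward — can be illustrated with a minimal sketch. The layer vocabulary, state encoding, depth, and reward below are simplified stand-ins; BlockQNN's actual Network Structure Code, reward shaping, and distributed training are considerably more elaborate.

```python
import random

# Hypothetical layer vocabulary; BlockQNN's real action space encodes
# layer type, kernel size, and predecessor connections.
LAYERS = ["conv3x3", "conv5x5", "maxpool", "avgpool", "identity"]
MAX_DEPTH = 4
ALPHA, GAMMA = 0.1, 1.0

# Q-table: state (depth, previous layer) -> {candidate layer: value}
Q = {}

def q_values(state):
    return Q.setdefault(state, {a: 0.0 for a in LAYERS})

def choose(state, eps):
    """Epsilon-greedy: random layer with probability eps, else the best known."""
    if random.random() < eps:
        return random.choice(LAYERS)
    vals = q_values(state)
    return max(vals, key=vals.get)

def sample_block(eps):
    """Sequentially choose MAX_DEPTH layers, as the learning agent does."""
    state, block = (0, "start"), []
    for depth in range(MAX_DEPTH):
        layer = choose(state, eps)
        block.append((state, layer))
        state = (depth + 1, layer)
    return block

def update(block, reward):
    """Q-learning backup along the sampled layer-choice trajectory."""
    for i, (state, action) in enumerate(block):
        nxt = (0.0 if i + 1 == len(block)
               else max(q_values(block[i + 1][0]).values()))
        q = q_values(state)
        q[action] += ALPHA * (reward + GAMMA * nxt - q[action])

random.seed(0)
for step in range(200):
    eps = max(0.1, 1.0 - step / 100)  # annealed exploration, as in the paper
    blk = sample_block(eps)
    # Stand-in reward favoring conv layers; the real reward is the
    # validation accuracy of the network built by stacking the block.
    r = sum(layer.startswith("conv") for _, layer in blk) / MAX_DEPTH
    update(blk, r)

best = sample_block(eps=0.0)  # greedy rollout after training
print([layer for _, layer in best])
```

In this toy setting the greedy rollout converges toward all-convolution blocks because the surrogate reward favors them; in BlockQNN the same mechanism steers the agent toward layer sequences whose stacked networks classify well.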


Similar Articles

1. BlockQNN: Efficient Block-Wise Neural Network Architecture Generation.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jul;43(7):2314-2328. doi: 10.1109/TPAMI.2020.2969193. Epub 2021 Jun 8.
2. Evolution of Deep Convolutional Neural Networks Using Cartesian Genetic Programming.
   Evol Comput. 2020 Spring;28(1):141-163. doi: 10.1162/evco_a_00253. Epub 2019 Mar 22.
3. Deeply Supervised Block-Wise Neural Architecture Search.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2451-2464. doi: 10.1109/TNNLS.2023.3347542. Epub 2025 Feb 6.
4. BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture.
   IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):5004-5018. doi: 10.1109/TNNLS.2021.3067028. Epub 2022 Aug 31.
5. Evolutionary Shallowing Deep Neural Networks at Block Levels.
   IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4635-4647. doi: 10.1109/TNNLS.2021.3059529. Epub 2022 Aug 31.
6. DiCENet: Dimension-Wise Convolutions for Efficient Networks.
   IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2416-2425. doi: 10.1109/TPAMI.2020.3041871. Epub 2022 Apr 1.
7. ResDNet: Efficient Dense Multi-Scale Representations With Residual Learning for High-Level Vision Tasks.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):3904-3915. doi: 10.1109/TNNLS.2022.3169779. Epub 2025 Feb 28.
8. MIGO-NAS: Towards Fast and Generalizable Neural Architecture Search.
   IEEE Trans Pattern Anal Mach Intell. 2021 Sep;43(9):2936-2952. doi: 10.1109/TPAMI.2021.3065138. Epub 2021 Aug 4.
9. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
   Front Comput Neurosci. 2021 Oct 27;15:760554. doi: 10.3389/fncom.2021.760554. eCollection 2021.
10. Improved Residual Network based on norm-preservation for visual recognition.
    Neural Netw. 2023 Jan;157:305-322. doi: 10.1016/j.neunet.2022.10.023. Epub 2022 Oct 28.

Cited By

1. An Efficient Evolutionary Neural Architecture Search Algorithm Without Training.
   Biomimetics (Basel). 2025 Jun 29;10(7):421. doi: 10.3390/biomimetics10070421.
2. Pod-pose: an efficient top-down keypoint detection model for fine-grained pod phenotyping in mature soybean.
   Plant Methods. 2025 Jun 9;21(1):82. doi: 10.1186/s13007-025-01399-0.
3. Evolutionary neural architecture search combining multi-branch ConvNet and improved transformer.
   Sci Rep. 2023 Sep 22;13(1):15791. doi: 10.1038/s41598-023-42931-3.
4. A Novel Reinforcement Learning Approach for Spark Configuration Parameter Optimization.
   Sensors (Basel). 2022 Aug 8;22(15):5930. doi: 10.3390/s22155930.