IEEE Trans Pattern Anal Mach Intell. 2021 Jul;43(7):2314-2328. doi: 10.1109/TPAMI.2020.2969193. Epub 2021 Jun 8.
Convolutional neural networks have achieved remarkable success in computer vision. However, most popular network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we present a block-wise network generation pipeline called BlockQNN, which automatically builds high-performance networks using the Q-learning paradigm with an epsilon-greedy exploration strategy. The optimal network block is constructed by a learning agent trained to choose component layers sequentially; we then stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early-stop strategy. The block-wise generation brings unique advantages: (1) it yields state-of-the-art results compared with hand-crafted networks on image classification; in particular, the best network generated by BlockQNN achieves a 2.35 percent top-1 error rate on CIFAR-10; (2) it offers a tremendous reduction of the search space in designing networks, spending only 3 days with 32 GPUs, while a faster version yields a comparable result with only 1 GPU in 20 hours; (3) it generalizes strongly, in that a network built on CIFAR also performs well on larger-scale datasets, with the best network achieving a very competitive 82.0 percent top-1 and 96.0 percent top-5 accuracy on ImageNet.
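The core mechanism the abstract describes is an epsilon-greedy Q-learning agent that picks component layers one at a time to form a block. The following is a minimal, hypothetical sketch of that loop, not the paper's actual method: the layer vocabulary, block depth limit, reward function, and hyperparameters are illustrative stand-ins (BlockQNN encodes layers with a Network Structure Code and uses validation accuracy of the stacked network as the reward).

```python
import random

# Illustrative layer vocabulary; BlockQNN's real action space is richer.
LAYERS = ["conv3x3", "conv5x5", "maxpool", "identity", "terminal"]
MAX_DEPTH = 4  # assumed cap on layers per block


def epsilon_greedy(q, state, eps):
    """Pick a layer: random with probability eps, else the best known Q-value."""
    if random.random() < eps:
        return random.choice(LAYERS)
    return max(LAYERS, key=lambda a: q.get((state, a), 0.0))


def sample_block(q, eps):
    """Roll out one block: a sequence of layer choices ending at 'terminal'."""
    state, block = ("start",), []
    while len(block) < MAX_DEPTH:
        action = epsilon_greedy(q, state, eps)
        block.append(action)
        if action == "terminal":
            break
        state = state + (action,)
    return block


def update_q(q, block, reward, alpha=0.1, gamma=1.0):
    """One-step Q-learning backup; the reward arrives at the final transition."""
    state = ("start",)
    for i, action in enumerate(block):
        next_state = state + (action,)
        last = i == len(block) - 1
        r = reward if last else 0.0
        future = 0.0 if last else max(
            q.get((next_state, a), 0.0) for a in LAYERS
        )
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (r + gamma * future - old)
        state = next_state


random.seed(0)
q = {}
for step in range(200):
    eps = max(0.1, 1.0 - step / 100)  # anneal exploration, as in epsilon-greedy schedules
    block = sample_block(q, eps)
    # Stand-in reward: in the paper this would be the validation accuracy of
    # the network built by stacking the sampled block.
    reward = 1.0 if "conv3x3" in block else 0.1
    update_q(q, block, reward)

best = sample_block(q, eps=0.0)  # greedy rollout after training
print(best)
```

The greedy rollout at the end corresponds to extracting the learned block once training converges; in the full pipeline this block would be stacked (with appropriate downsampling between stages) and trained from scratch on the target dataset.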