College of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China.
Yiyang Branch, China Telecom Co., Ltd., Yiyang 413000, China.
Comput Intell Neurosci. 2020 Aug 1;2020:8817849. doi: 10.1155/2020/8817849. eCollection 2020.
Because deep neural networks (DNNs) are both memory-intensive and computation-intensive, they are difficult to deploy on embedded systems with limited hardware resources. DNN models therefore need to be compressed and accelerated. By applying depthwise separable convolutions, MobileNet reduces the number of parameters and the computational complexity with only a small loss of classification accuracy. Based on MobileNet, three improved MobileNet models with local receptive field expansion in shallow layers, called Dilated-MobileNet (Dilated Convolution MobileNet) models, are proposed, in which dilated convolutions are introduced into a specific convolutional layer of the MobileNet model. Without increasing the number of parameters, dilated convolutions enlarge the receptive field of the convolution filters to obtain better classification accuracy. Experiments were performed on the Caltech-101, Caltech-256, and Tübingen Animals with Attributes datasets, respectively. The results show that Dilated-MobileNets can obtain up to 2% higher classification accuracy than MobileNet.
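The abstract's two quantitative claims can be illustrated with a short arithmetic sketch: depthwise separable convolution shrinks the parameter count relative to a standard convolution, and dilation enlarges the effective receptive field without adding any parameters. The function names and the 32-in/64-out channel example below are illustrative choices, not from the paper; the formulas (k·k·C_in·C_out for a standard convolution, k·k·C_in + C_in·C_out for its depthwise separable factorization, and d·(k−1)+1 for the effective kernel size under dilation d) are the standard ones these architectures rely on.

```python
# Illustrative sketch (function names are hypothetical, not from the paper).

def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution: k * k * c_in * c_out."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise
    convolution mixing channels: k*k*c_in + c_in*c_out weights."""
    return k * k * c_in + c_in * c_out

def effective_kernel_size(k, dilation):
    """A k x k filter with dilation d covers d*(k-1)+1 positions per axis,
    with the same number of weights as the undilated filter."""
    return dilation * (k - 1) + 1

# Example: 3x3 convolution, 32 input channels, 64 output channels.
print(standard_conv_params(3, 32, 64))         # 18432 weights
print(depthwise_separable_params(3, 32, 64))   # 2336 weights, ~8x fewer
print(effective_kernel_size(3, 1))             # 3 (ordinary convolution)
print(effective_kernel_size(3, 2))             # 5 (dilated, same 9 weights)
```

This is the trade the paper exploits: the dilated depthwise filter keeps MobileNet's small parameter budget while its shallow layers see a wider local context.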