Research Group on Intelligent Machines, National School of Engineers of Sfax, B.P. W 3038, Sfax, Tunisia.
Neural Netw. 2017 Nov;95:10-18. doi: 10.1016/j.neunet.2017.07.015. Epub 2017 Aug 8.
Deep Convolutional Neural Networks (DCNNs) have proven to be powerful tools for object and image classification and retrieval. However, the training stage of such networks is highly demanding in terms of storage space and time, and their optimization remains a challenging problem. In this paper, we propose a fast DCNN based on the Fast Wavelet Transform (FWT), intelligent dropout, and layer skipping. The proposed approach improves both image retrieval accuracy and search time, thanks to three key advantages. First, features are computed rapidly using the FWT. Second, the proposed intelligent dropout method selects units according to their efficiency rather than at random. Third, an image can be classified using the efficient units of earlier layer(s), skipping all subsequent hidden layers and proceeding directly to the output layer. Our experiments were performed on the CIFAR-10 and MNIST datasets, and the obtained results are very promising.
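The abstract contrasts intelligent dropout with standard random dropout: units are kept or dropped based on how useful they are. The sketch below illustrates this idea under an assumed efficiency criterion (mean absolute activation over a batch); the paper's exact selection rule is not given in the abstract, so this is only an illustration of the keep-the-efficient-units principle, not the authors' implementation.

```python
import numpy as np

def intelligent_dropout(activations, keep_ratio=0.5):
    """Keep the most 'efficient' units instead of dropping at random.

    Efficiency is assumed here to be the mean absolute activation of
    each unit over the batch; the paper's actual criterion may differ.

    activations: array of shape (batch, units)
    Returns the masked activations and the binary keep-mask.
    """
    scores = np.abs(activations).mean(axis=0)   # per-unit efficiency score
    k = max(1, int(keep_ratio * scores.size))   # number of units to keep
    keep = np.argsort(scores)[-k:]              # indices of the top-k units
    mask = np.zeros(scores.size)
    mask[keep] = 1.0
    return activations * mask, mask

# Usage: a batch of 4 samples passing through a 6-unit hidden layer.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 6))
masked, mask = intelligent_dropout(acts, keep_ratio=0.5)
```

Unlike random dropout, the same deterministic mask is produced for a given batch, which is what would allow the efficient units of an early layer to be reused directly for classification, as the layer-skipping idea suggests.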