Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.

Author Information

School of Computer Science & Engineering, KLE Technological University, Hubballi, India.

Publication Information

Neural Netw. 2021 Apr;136:28-39. doi: 10.1016/j.neunet.2020.12.022. Epub 2020 Dec 29.

Abstract

Deep Neural Networks (DNNs) have become popular for a wide range of image and computer-vision applications owing to their well-established performance. DNN algorithms rely on powerful multilevel feature extraction, which leads to large numbers of parameters and large memory footprints. However, the memory bandwidth requirements, memory footprint, and associated power consumption of these models must be addressed before DNN models can be deployed on embedded platforms for real-time vision-based applications. We present a DNN model optimized for memory and accuracy for vision-based applications on embedded platforms. In this paper we propose the Quantization Friendly MobileNet (QF-MobileNet) architecture, which is optimized for inference accuracy and reduced resource utilization. The optimization is obtained by addressing the redundancy and quantization loss of the existing baseline MobileNet architectures. We verify and validate the performance of the QF-MobileNet architecture on the image classification task using the ImageNet dataset. The proposed model is tested for inference accuracy and resource utilization and compared to the baseline MobileNet architectures. The proposed QF-MobileNetV2 float model attained an inference accuracy of 73.36% and its quantized model 69.51%; the QF-MobileNetV3 float model attained 68.75% and its quantized model 67.5%. The proposed QF-MobileNetV2 and QF-MobileNetV3 models reduce time complexity by 33% relative to the baseline models. QF-MobileNet also showed optimized resource utilization, with 32% fewer tunable parameters, 30% fewer MAC (multiply-accumulate) operations per image, and approximately 5% lower inference quantization loss than the baseline models. The model is ported to an Android application using the TensorFlow API, and the application performs inference natively on devices such as smartphones, tablets, and other handhelds. Future work focuses on introducing channel-wise and layer-wise quantization schemes to the proposed model; we also intend to explore quantization-aware training of DNN algorithms to achieve optimized resource utilization and inference accuracy.
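
The parameter and MAC savings reported above stem from MobileNet's core building block, the depthwise separable convolution. As a rough illustration of that mechanism (the layer shape below is an assumed example, not a layer taken from QF-MobileNet, and the percentages will differ from the paper's whole-model figures), the following sketch compares the cost of a standard convolution against a depthwise separable one:

```python
# Sketch: why depthwise separable convolutions cut parameters and MACs.
# The layer shape (112x112 feature map, 3x3 kernel, 64 -> 128 channels)
# is an assumed example; bias terms are ignored, as is conventional.

def standard_conv_cost(h, w, k, c_in, c_out):
    """Parameters and MACs of a standard k x k convolution."""
    params = k * k * c_in * c_out
    macs = params * h * w  # one MAC per weight per output position
    return params, macs

def depthwise_separable_cost(h, w, k, c_in, c_out):
    """Parameters and MACs of a depthwise conv + 1x1 pointwise conv."""
    dw_params = k * k * c_in   # depthwise: one k x k filter per input channel
    pw_params = c_in * c_out   # pointwise: 1x1 conv mixes channels
    macs = (dw_params + pw_params) * h * w
    return dw_params + pw_params, macs

if __name__ == "__main__":
    shape = dict(h=112, w=112, k=3, c_in=64, c_out=128)
    std_p, std_m = standard_conv_cost(**shape)
    sep_p, sep_m = depthwise_separable_cost(**shape)
    print(f"standard:  {std_p:,} params, {std_m:,} MACs")
    print(f"separable: {sep_p:,} params, {sep_m:,} MACs")
    print(f"reduction: {1 - sep_p / std_p:.1%} params, {1 - sep_m / std_m:.1%} MACs")
```

For this example shape the separable layer needs 8,768 parameters versus 73,728 for the standard convolution, an almost 9x reduction at the same spatial resolution.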
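
The abstract describes porting the model to Android via the TensorFlow API. As a hedged sketch of that kind of deployment path, the snippet below applies standard TensorFlow Lite post-training quantization to a stock Keras MobileNetV2 used as a stand-in for QF-MobileNet; the authors' exact architecture and conversion settings are not given in the abstract, and `representative_data()` is a hypothetical calibration generator you would replace with real preprocessed images.

```python
# Sketch: TensorFlow Lite post-training quantization, one common way to
# produce a quantized model for on-device (Android) inference.
import numpy as np
import tensorflow as tf

# Stand-in model; the paper's QF-MobileNet architecture would go here.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def representative_data():
    # Hypothetical calibration generator: replace the random tensors
    # with a few hundred real, preprocessed input images.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
converter.representative_dataset = representative_data
tflite_model = converter.convert()

with open("mobilenet_v2_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file can then be bundled with an Android app and executed through the TensorFlow Lite Interpreter, matching the abstract's description of native on-device inference on smartphones and tablets.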

