School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China.
Sensors (Basel). 2021 Jan 1;21(1):229. doi: 10.3390/s21010229.
Recent advances in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and high energy consumption. The traditional approach is to execute DNNs in the central cloud, but this requires transferring significant amounts of data to the cloud over the wireless network, which also results in long latency. To address this problem, offloading part of the DNN computation to edge clouds has been proposed, enabling collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of the proposed MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adapts well to different network conditions.
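To illustrate the core idea behind partition offloading for a chain-topology DNN, the sketch below enumerates candidate split points and picks the one minimizing total latency. This is not the paper's MDPO algorithm (which also models user mobility); it is a minimal illustration, and all timing values in the example are hypothetical.

```python
def best_partition(device_t, edge_t, tx):
    """Pick the partition point of a chain-topology DNN that minimizes
    total latency.

    device_t[i] -- latency of layer i on the mobile device
    edge_t[i]   -- latency of layer i on the edge cloud
    tx[k]       -- time to upload the intermediate output at split k
                   (tx[0] = raw input upload for edge-only execution;
                    tx[n] = 0 for local-only execution)

    Layers [0, k) run on the device, layers [k, n) on the edge.
    Returns (best_k, best_latency).
    """
    n = len(device_t)
    best_k, best_lat = 0, float("inf")
    for k in range(n + 1):  # k = 0 is edge-only, k = n is local-only
        lat = sum(device_t[:k]) + tx[k] + sum(edge_t[k:])
        if lat < best_lat:
            best_k, best_lat = k, lat
    return best_k, best_lat


# Hypothetical per-layer timings (ms): deep layers are cheap on the edge,
# and intermediate feature maps shrink, so uploading later costs less.
k, lat = best_partition(
    device_t=[5, 10, 20, 40],
    edge_t=[1, 2, 4, 8],
    tx=[30, 12, 6, 3, 0],
)
print(k, lat)  # splitting after layer 0 beats both local-only and edge-only
```

Note how the optimum is an interior split: the device runs the first cheap layer, then uploads a smaller intermediate output than the raw input would require, and the edge finishes the rest. Extending this search to graph-topology DNNs and to a moving user, as the paper does, requires a more elaborate formulation.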