AoI-Aware Optimization of Service Caching-Assisted Offloading and Resource Allocation in Edge Cellular Networks.

Affiliation

The Guangdong Key Laboratory of Information Security Technology, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China.

Publication Information

Sensors (Basel). 2023 Mar 21;23(6):3306. doi: 10.3390/s23063306.

Abstract

The rapid development of the Internet of Things (IoT) has led to computation offloading at the edge, a promising paradigm for achieving intelligence everywhere. As offloading can lead to more traffic in cellular networks, caching is used to alleviate the channel burden. For example, a deep neural network (DNN)-based inference task requires a computation service comprising the running libraries and parameters, so caching the service package is necessary for repeatedly running DNN-based inference tasks. On the other hand, as DNN parameters are usually trained in a distributed fashion, IoT devices need to fetch up-to-date parameters to execute inference tasks. In this work, we consider the joint optimization of computation offloading, service caching, and the Age of Information (AoI) metric. We formulate a problem to minimize the weighted sum of the average completion delay, energy consumption, and allocated bandwidth. We then propose the AoI-aware service caching-assisted offloading framework (ASCO) to solve it, which consists of the Lagrange-multiplier, KKT-condition-based offloading module (LMKO), the Lyapunov-optimization-based learning and update control module (LLUC), and the Kuhn-Munkres (KM) algorithm-based channel-division fetching module (KCDF). Simulation results demonstrate that our ASCO framework achieves superior performance in terms of time overhead, energy consumption, and allocated bandwidth. It is verified that our ASCO framework benefits not only individual tasks but also global bandwidth allocation.
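The "up-to-date parameters" requirement in the abstract is what the AoI metric captures: the age of the freshest parameter update a device holds. As a minimal discrete-time sketch (not the paper's formulation), AoI at slot t equals t minus the generation time of the newest delivered update, growing by one each slot and dropping when a fresher update arrives:

```python
def aoi_trace(deliveries, horizon):
    """Discrete-time AoI evolution.

    deliveries: dict mapping delivery slot -> generation slot of that update.
    Returns the AoI value at each slot 1..horizon.
    """
    freshest = 0  # generation time of the newest update received so far
    trace = []
    for t in range(1, horizon + 1):
        if t in deliveries:
            freshest = max(freshest, deliveries[t])  # ignore stale arrivals
        trace.append(t - freshest)
    return trace

# An update generated at slot 2 arrives at slot 4; another generated at
# slot 6 arrives at slot 7. AoI ramps up and resets on each delivery.
print(aoi_trace({4: 2, 7: 6}, 8))  # [1, 2, 3, 2, 3, 4, 1, 2]
```

Note the sawtooth shape: each delivery resets AoI to the update's in-flight delay, after which it grows linearly again.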

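The KCDF module applies the Kuhn-Munkres (Hungarian) algorithm, which solves the assignment problem in polynomial time. To illustrate the underlying problem — matching devices that must fetch updated parameters to channels so that total fetching delay is minimal — here is a brute-force sketch on a tiny, made-up delay matrix (KM itself avoids this enumeration):

```python
from itertools import permutations

def best_assignment(delay):
    """Exhaustively find the device-to-channel assignment minimizing total delay.

    delay[i][j] is the (hypothetical) fetching delay if device i uses channel j.
    KM computes the same optimum in O(n^3); enumeration is only for illustration.
    """
    n = len(delay)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):  # perm[i] = channel assigned to device i
        cost = sum(delay[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

# Three devices, three channels, illustrative delays.
delay = [[4, 2, 8],
         [4, 3, 7],
         [3, 1, 6]]
print(best_assignment(delay))  # (12, (0, 2, 1))
```

A per-device greedy choice would let two devices contend for the same fast channel; solving the assignment jointly is what makes the matching globally optimal, which is why a KM-style module benefits global bandwidth allocation rather than individual tasks alone.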

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/26d7/10056929/3ba014489098/sensors-23-03306-g001.jpg
