Minimizing Delay and Power Consumption at the Edge.

Author Information

Gelenbe Erol

Affiliations

Institute of Theoretical & Applied Informatics, Polish Academy of Sciences (IITiS-PAN), 44-100 Gliwice, Poland.

Université Côte d'Azur, CNRS I3S, 06107 Nice, France.

Publication Information

Sensors (Basel). 2025 Jan 16;25(2):502. doi: 10.3390/s25020502.

Abstract

Edge computing systems must offer low latency at low cost and low power consumption for sensors and other applications, including the IoT, smart vehicles, smart homes, and 6G. Thus, substantial research has been conducted to identify optimum task allocation schemes in this context using non-linear optimization, machine learning, and market-based algorithms. Prior work has mainly focused on two methodologies: (i) formulating non-linear optimizations that lead to NP-hard problems, which are processed via heuristics, and (ii) using AI-based formulations, such as reinforcement learning, that are then tested with simulations. These prior approaches have two shortcomings: (a) there is no guarantee that optimum solutions are achieved, and (b) they do not provide an explicit formula for the fraction of tasks that are allocated to the different servers to achieve a specified optimum. This paper offers a radically different and mathematically based principled method that explicitly computes the optimum fraction of jobs that should be allocated to the different servers to (1) minimize the average latency (delay) of the jobs that are allocated to the edge servers and (2) minimize the average energy consumption of these jobs at the set of edge servers. These results are obtained with a mathematical model of a multiple-server edge system that is managed by a task distribution platform, whose equations are derived and solved using methods from stochastic processes. This approach has low computational cost and provides simple linear complexity formulas to compute the fraction of tasks that should be assigned to the different servers to achieve minimum latency and minimum energy consumption.
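
The abstract does not reproduce the paper's closed-form allocation formulas, but the flavor of such an explicit, low-complexity allocation can be illustrated with a classical queueing-theoretic sketch. The snippet below assumes each edge server behaves as an independent M/M/1 queue with service rate mu[i] and that a dispatcher splits a total task arrival rate lam among the servers; minimizing the resulting mean response time yields the well-known square-root ("water-filling") allocation. The model, the function name optimal_fractions, and the numerical example are illustrative assumptions, not the model actually analyzed in the paper.

```python
# Illustrative sketch only (assumed model, not taken from the paper):
# each edge server i is modeled as an M/M/1 queue with service rate mu[i],
# tasks arrive at total rate lam, and a dispatcher routes a fraction of the
# stream to each server. Minimizing the mean response time
#   sum_i (lam_i / lam) * 1 / (mu_i - lam_i)   subject to   sum_i lam_i = lam
# gives the classical square-root ("water-filling") allocation
#   lam_i = mu_i - sqrt(mu_i) * (sum_active mu_j - lam) / sum_active sqrt(mu_j)
# on the set of fast-enough servers, and lam_i = 0 elsewhere.

import math

def optimal_fractions(mu, lam):
    """Fractions of the task stream to send to each server so that the mean
    M/M/1 response time is minimized (lam must be below total capacity)."""
    if lam >= sum(mu):
        raise ValueError("total arrival rate must be below total service capacity")

    # Try the k fastest servers, for k = n, n-1, ..., 1; the optimum is the
    # largest k for which every computed allocation is non-negative.
    order = sorted(range(len(mu)), key=lambda i: mu[i], reverse=True)
    for k in range(len(mu), 0, -1):
        active = order[:k]
        c = (sum(mu[i] for i in active) - lam) / sum(math.sqrt(mu[i]) for i in active)
        rates = {i: mu[i] - math.sqrt(mu[i]) * c for i in active}
        if all(r >= 0.0 for r in rates.values()):
            return [rates.get(i, 0.0) / lam for i in range(len(mu))]

# Example: three heterogeneous edge servers and a total arrival rate of 5 tasks/s.
if __name__ == "__main__":
    print(optimal_fractions(mu=[10.0, 4.0, 2.0], lam=5.0))
    # -> roughly [0.90, 0.10, 0.00]: the slowest server receives no load.
```

In this toy instance the allocation is computed directly from a closed-form expression in a single pass over the servers, which mirrors the paper's point that the optimal task fractions can be obtained with simple, linear-complexity formulas rather than iterative heuristics or learned policies.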

Figure 1. https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e6e/11768709/ec0508c4d349/sensors-25-00502-g001.jpg
