
DRL-Driven Intelligent SFC Deployment in MEC Workload for Dynamic IoT Networks.

Author Information

Ros Seyha, Ryoo Intae, Kim Seokhoon

Affiliations

Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea.

Department of Computer Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea.

Publication Information

Sensors (Basel). 2025 Jul 8;25(14):4257. doi: 10.3390/s25144257.

Abstract

The rapid proliferation of Internet of Things (IoT) sensor networks has led to exponential growth in data generation and unprecedented demand for efficient resource-management infrastructure. Ensuring end-to-end communication across multiple heterogeneous network domains is crucial to meeting the Quality of Service (QoS) requirements of IoT applications, such as low latency and high computational capacity. However, the limited computing resources of multi-access edge computing (MEC) servers, coupled with growing IoT network requests during task offloading, often lead to network congestion, service latency, and inefficient resource utilization, degrading overall system performance. This paper proposes an intelligent task-offloading and resource-orchestration framework to address these challenges, optimizing energy consumption, computational cost, network congestion, and service latency in dynamic IoT-MEC environments. The framework combines task offloading, which distributes computation workloads efficiently to MEC servers, with a dynamic resource-orchestration strategy comprising Service Function Chaining (SFC) for Virtual Network Function (VNF) placement and routing-path determination, optimizing service execution across the network. To achieve adaptive, intelligent decision-making, the approach leverages Deep Reinforcement Learning (DRL) to allocate resources and offload task execution dynamically, improving overall system efficiency and learning an optimal policy for edge computing. Specifically, a Deep Q-Network (DQN) learns an optimal policy for network-resource adjustment and task offloading, enabling flexible adaptation in SFC deployment.
Simulation results demonstrate that the DRL-based scheme significantly outperforms the reference schemes in cumulative reward, service latency, energy consumption, delivery ratio, and throughput.
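To make the offloading decision concrete, the sketch below uses tabular Q-learning over a coarsely discretized state space as a minimal stand-in for the paper's DQN agent. The state features (task size, MEC server load), the latency/energy cost numbers, and the single-step episode structure are all illustrative assumptions, not details from the paper; the paper's actual agent uses a neural Q-function over a richer SFC/VNF state.

```python
import random

# Toy stand-in for the paper's DQN agent: tabular Q-learning over a
# discretized offloading problem. All cost numbers are illustrative
# assumptions, not values from the paper.

ACTIONS = ("local", "offload")      # execute on device vs. on the MEC server
TASK_SIZES = ("small", "large")     # discretized task workload
MEC_LOADS = ("idle", "busy")        # discretized MEC server load

def cost(task, load, action):
    """Combined latency/energy cost of an action (lower is better)."""
    if action == "local":
        return 1.0 if task == "small" else 6.0   # device is slow on big tasks
    # Offloading pays a transmission cost plus queueing delay when MEC is busy.
    tx = 0.5 if task == "small" else 1.5
    queue = 4.0 if load == "busy" else 0.0
    return tx + 1.0 + queue

def train(episodes=5000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(t, l, a): 0.0 for t in TASK_SIZES for l in MEC_LOADS for a in ACTIONS}
    for _ in range(episodes):
        state = (rng.choice(TASK_SIZES), rng.choice(MEC_LOADS))
        if rng.random() < epsilon:                       # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = min(ACTIONS, key=lambda a: q[state + (a,)])
        # Single-step episodes, so Q simply estimates the expected cost.
        q[state + (action,)] += alpha * (cost(*state, action) - q[state + (action,)])
    return q

def policy(q, task, load):
    """Greedy (minimum estimated cost) offloading decision."""
    return min(ACTIONS, key=lambda a: q[(task, load, a)])

q = train()
print(policy(q, "large", "idle"))   # large task, idle MEC: offloading wins
print(policy(q, "small", "idle"))   # small task: local execution is cheaper
```

The learned policy offloads only when the MEC-side cost (transmission plus queueing) undercuts local execution, which is the trade-off the abstract's congestion and latency terms capture; the DQN in the paper plays the same role with a function approximator instead of a table.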


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b5ef/12300470/1b14ff6fcac5/sensors-25-04257-g001.jpg
