Liu Liu, Xu Zhifei
College of Business Administration, Capital University of Economics and Business, Beijing, 100070, China.
School of Science and Engineering, Chinese University of Hong Kong - Shenzhen, Shenzhen, 518172, Guangdong, China.
Sci Rep. 2025 Jul 1;15(1):22056. doi: 10.1038/s41598-025-04652-7.
In the era of rapid technological advancement, Mobile Edge Computing (MEC) has become essential for supporting latency-sensitive applications such as the Internet of Things, autonomous driving, and smart cities. However, efficient resource allocation remains a challenge due to the dynamic nature of MEC environments. The primary difficulties stem from fluctuating workloads, varying network conditions, and heterogeneous computational capabilities, which make real-time task offloading and resource management complex. Traditional centralized approaches suffer from high computational overhead and poor scalability, while conventional machine learning-based methods often require extensive labeled data and adapt slowly in dynamic settings. To address these issues, this study proposes an advanced Multi-Agent Reinforcement Learning (MARL) framework combined with a lightweight neural network, LtNet, to optimize task offloading and resource management in MEC. MARL enables decentralized decision-making, allowing each device to learn an optimal offloading strategy and adapt dynamically. Compared with prior single-agent or heuristic methods, our approach improves scalability and efficiency while reducing computational complexity. LtNet further enhances performance through H-Swish activation and selective Squeeze-and-Excitation modules, keeping computational overhead low. Experimental results demonstrate that the proposed methods achieve a 12-22% reduction in task completion time, a 5-8% decrease in energy consumption, and consistently high resource utilization, making them highly effective in managing dynamic MEC environments.
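The abstract names two LtNet building blocks, H-Swish activation and Squeeze-and-Excitation (SE) modules, without giving the network's wiring. Below is a minimal sketch of the standard formulations of these two components, assuming a PyTorch implementation; the class names, the reduction ratio of 4, and the example tensor shapes are illustrative assumptions, not the paper's actual LtNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HSwish(nn.Module):
    """H-Swish: x * ReLU6(x + 3) / 6, a piecewise-linear approximation of
    Swish that avoids the sigmoid, which is why lightweight networks favor it."""
    def forward(self, x):
        return x * F.relu6(x + 3.0) / 6.0

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global-average-pool each channel ("squeeze"),
    then learn per-channel gates ("excite") that reweight the feature map."""
    def __init__(self, channels: int, reduction: int = 4):  # reduction=4 is assumed
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: (b, c) channel descriptor
        return x * w.view(b, c, 1, 1)     # excite: channel-wise rescaling

# Usage example on a dummy feature map.
if __name__ == "__main__":
    feats = torch.randn(2, 16, 8, 8)       # batch of 16-channel feature maps
    out = HSwish()(SEBlock(16)(feats))
    print(out.shape)                       # torch.Size([2, 16, 8, 8])
```

Both pieces add very few parameters relative to a convolutional backbone, which is consistent with the abstract's claim of lower computational overhead; "selective" SE placement presumably means inserting `SEBlock` only at a subset of layers rather than after every block.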