Zhang Ran, Liu Lei, Dong Mianxiong, Ota Kaoru
School of Software, Shandong University, Jinan 250101, China.
Department of Information and Electronic Engineering, Muroran Institute of Technology, Muroran 050-8585, Japan.
Sensors (Basel). 2024 Feb 2;24(3):980. doi: 10.3390/s24030980.
The development of emerging information technologies such as the Internet of Things (IoT), edge computing, and blockchain has driven a sharp increase in IoT application services and data volume. Ensuring satisfactory service quality for diverse IoT application services with limited network resources has therefore become an urgent issue. Generalized processor sharing (GPS), a core resource-scheduling mechanism underlying differentiated services, is a key technology for implementing on-demand resource allocation. Predicting GPS performance is a crucial step that aims to capture the actually allocated resources through various queue metrics. Some methods (mainly analytical ones) have attempted to establish upper and lower bounds or approximate solutions. More recently, artificial intelligence (AI) methods such as deep learning have been designed to assess performance under self-similar traffic. However, the methods proposed in the literature were developed for specific traffic scenarios with predefined constraints, which limits their real-world applicability. Furthermore, the absence of a benchmark in the literature leads to unfair comparisons of performance prediction. To address these drawbacks, an AI-enabled performance benchmark with comprehensive traffic-oriented experiments showcasing the performance of existing methods is presented. Specifically, three types of methods are employed: traditional approximate analytical methods, traditional machine-learning-based methods, and deep-learning-based methods. Various traffic flows with different settings are then collected, and detailed experimental analyses at both the feature and method levels are conducted under different traffic conditions. Finally, insights from the experimental analysis that may benefit the future performance prediction of GPS are derived.
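As background for readers unfamiliar with GPS, the following minimal sketch (not taken from the paper; function and variable names are illustrative) shows the idealized allocation rule that GPS performance prediction targets: each backlogged flow i with weight w_i receives a service rate of C * w_i / sum of the weights of all backlogged flows, where C is the link capacity.

```python
def gps_rates(capacity, weights, backlogged):
    """Idealized GPS allocation (illustrative sketch, not the paper's method).

    capacity   -- total link capacity (e.g. in Mbps)
    weights    -- dict mapping flow id -> scheduling weight
    backlogged -- set of flow ids that currently have queued traffic
    Returns a dict mapping each flow id to its instantaneous service rate.
    """
    total = sum(weights[f] for f in backlogged)
    return {
        f: (capacity * weights[f] / total if f in backlogged else 0.0)
        for f in weights
    }

# With flows a, b, c weighted 1:2:1 and only a and b backlogged,
# a and b split the 100-unit link in a 1:2 ratio; idle c gets nothing.
rates = gps_rates(100.0, {"a": 1, "b": 2, "c": 1}, {"a", "b"})
```

In practice, packetized approximations of this fluid model (e.g. weighted fair queueing) are deployed, which is one reason predicting the actually allocated resources from queue metrics is nontrivial.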