Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing 100876, China.
Sensors (Basel). 2020 Jan 22;20(3):610. doi: 10.3390/s20030610.
The mobile edge computing architecture successfully addresses the high-latency problem of cloud computing. However, current research focuses on computation offloading and pays little attention to service caching. To solve the service caching problem, especially in Sensor Networks scenarios with high user mobility, we study a mobility-aware service caching mechanism. Our goal is to maximize the number of users served by the local edge cloud, which requires predicting each user's target location to avoid invalid service requests. First, we propose an idealized geometric model to predict the target area of a user's movement. Since it is difficult to obtain all the data the model needs in practical applications, we use frequent-pattern mining to extract local movement trajectory information. Then, combining the trajectory-mining results with the proposed geometric model, we predict the user's target location. Based on the prediction result and the existing service cache, the service request is forwarded to the appropriate base station by a service allocation algorithm. Finally, to train and predict the most popular services online, we propose a service cache selection algorithm based on a back-propagation (BP) neural network. Simulation experiments show that our service cache algorithm reduces service response time by about 13.21% on average compared with other algorithms, and increases the local service proportion by about 15.19% on average compared with the same algorithm without mobility prediction.
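The abstract does not give the details of the BP-based cache selection step, but the idea of training a small back-propagation network online on per-service request counts and caching the services with the highest predicted popularity can be sketched as follows. All sizes, the synthetic request history, and the single-hidden-layer architecture here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hedged sketch: a one-hidden-layer network trained by back-propagation to
# predict each service's next-period popularity from its recent request
# history; the top-k predictions become the edge cache set. All names and
# sizes below are hypothetical.
rng = np.random.default_rng(0)

N_SERVICES, WINDOW, HIDDEN, CACHE_K = 6, 4, 8, 2

# Synthetic request shares: services 0 and 3 are persistently popular.
base = np.array([0.40, 0.05, 0.05, 0.35, 0.10, 0.05])
history = np.clip(base + rng.normal(0, 0.02, size=(50, N_SERVICES)), 0.0, 1.0)

# Input->hidden and hidden->output weights.
W1 = rng.normal(0, 0.1, (WINDOW, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y, lr=0.5):
    """One back-propagation step on a (request window, next share) pair."""
    global W1, W2
    h = sigmoid(x @ W1)           # hidden activations, shape (HIDDEN,)
    out = sigmoid(h @ W2)[0]      # predicted popularity in (0, 1)
    # Gradients of squared error through both sigmoid layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (W2[:, 0] * d_out) * h * (1 - h)
    W2 -= lr * np.outer(h, [d_out])
    W1 -= lr * np.outer(x, d_h)

# Online training: slide a window over the history for every service,
# repeated a few passes to let the small network settle.
for _ in range(5):
    for t in range(WINDOW, len(history)):
        for s in range(N_SERVICES):
            train_step(history[t - WINDOW:t, s], history[t, s])

# Predict next-period popularity and select the cache set.
latest = history[-WINDOW:]
scores = [sigmoid(sigmoid(latest[:, s] @ W1) @ W2)[0] for s in range(N_SERVICES)]
cache = sorted(np.argsort(scores)[-CACHE_K:].tolist())
print("cached services:", cache)
```

In this sketch the network only has to learn a monotone mapping from recent request share to predicted share, so the persistently popular services end up with the highest scores and fill the cache; the paper's online variant would keep applying `train_step` as new request counts arrive.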