

Similar Literature

1. Intelligent Task Caching in Edge Cloud via Bandit Learning.
   IEEE Trans Netw Sci Eng. 2021;8(1). doi: 10.1109/tnse.2020.3047417.
2. A multi-stage heuristic method for service caching and task offloading to improve the cooperation between edge and cloud computing.
   PeerJ Comput Sci. 2022 Jun 23;8:e1012. doi: 10.7717/peerj-cs.1012. eCollection 2022.
3. Optimal Design of Hierarchical Cloud-Fog&Edge Computing Networks with Caching.
   Sensors (Basel). 2020 Mar 12;20(6):1582. doi: 10.3390/s20061582.
4. Collaborative Task Offloading and Service Caching Strategy for Mobile Edge Computing.
   Sensors (Basel). 2022 Sep 7;22(18):6760. doi: 10.3390/s22186760.
5. Mobility-Aware Service Caching in Mobile Edge Computing for Internet of Things.
   Sensors (Basel). 2020 Jan 22;20(3):610. doi: 10.3390/s20030610.
6. Deep Reinforcement Learning-Enabled Computation Offloading: A Novel Framework to Energy Optimization and Security-Aware in Vehicular Edge-Cloud Computing Networks.
   Sensors (Basel). 2025 Mar 25;25(7):2039. doi: 10.3390/s25072039.
7. Blockchain Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment.
   Sensors (Basel). 2024 Apr 3;24(7):2279. doi: 10.3390/s24072279.
8. Integrated Quality of Service for Offline and Online Services in Edge Networks via Task Offloading and Service Caching.
   Sensors (Basel). 2024 Jul 18;24(14):4677. doi: 10.3390/s24144677.
9. Delay-aware distributed program caching for IoT-edge networks.
   PLoS One. 2022 Jul 19;17(7):e0270183. doi: 10.1371/journal.pone.0270183. eCollection 2022.
10. Deep Reinforcement Learning for Edge Caching with Mobility Prediction in Vehicular Networks.
    Sensors (Basel). 2023 Feb 3;23(3):1732. doi: 10.3390/s23031732.

Intelligent Task Caching in Edge Cloud via Bandit Learning.

Authors

Miao Yiming, Hao Yixue, Chen Min, Gharavi Hamid, Hwang Kai

Affiliations

School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China.

Department of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China, and also with the Wuhan National Laboratory for Optoelectronics, Wuhan 430074, China.

Publication

IEEE Trans Netw Sci Eng. 2021;8(1). doi: 10.1109/tnse.2020.3047417.

DOI: 10.1109/tnse.2020.3047417
PMID: 34409117
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8370040/
Abstract

Task caching, based on edge cloud, aims to meet the latency requirements of computation-intensive and data-intensive tasks (such as augmented reality). However, current task caching strategies are generally based on the unrealistic assumption that the pattern of user task requests is known, ignoring the fact that request patterns are user specific (e.g., mobility and personalized task demand). They also disregard the impact of task size and computing amount on the caching strategy. To investigate these issues, we first formalize the task caching problem as a non-linear integer programming problem to minimize task latency. We then design a novel intelligent task caching algorithm based on the multi-armed bandit framework, called M-adaptive upper confidence bound (M-AUCB). The proposed caching strategy can not only learn the task request patterns of mobile devices online, but also dynamically adjust the caching decisions to incorporate the size and computing amount of each task. Moreover, we prove that the M-AUCB algorithm achieves a sublinear regret bound. The results show that, compared with other task caching schemes, M-AUCB reduces the average task latency by at least 14.8%.
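The abstract describes a bandit that treats each cacheable task as an arm and adjusts its confidence index by task size and computing amount. Below is a minimal sketch of that idea: a plain size-normalized upper-confidence-bound policy, not the paper's exact M-AUCB. The class name, the reward model (latency saved per request), and the size normalization are all illustrative assumptions, not details taken from the paper.

```python
import math


class UCBTaskCache:
    """Size-normalized UCB sketch for edge task caching.

    Illustrative only: each cacheable task is an "arm"; the reward of
    caching a task models the latency saved when it is requested, and
    the index is divided by task size so larger tasks must earn
    proportionally more benefit. This loosely mirrors (but does not
    reproduce) the size/compute-aware adjustment in M-AUCB.
    """

    def __init__(self, n_tasks, cache_slots, sizes):
        self.n = n_tasks
        self.k = cache_slots              # tasks that fit in the edge cache
        self.sizes = sizes                # task sizes, arbitrary units
        self.counts = [0] * n_tasks       # times each task has been cached
        self.rewards = [0.0] * n_tasks    # cumulative observed reward
        self.t = 0                        # rounds played

    def select(self):
        """Cache the cache_slots tasks with the highest UCB index."""
        self.t += 1
        indexed = []
        for i in range(self.n):
            if self.counts[i] == 0:
                score = float("inf")      # explore each task at least once
            else:
                mean = self.rewards[i] / self.counts[i]
                bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[i])
                # favour tasks that save the most latency per unit of size
                score = (mean + bonus) / self.sizes[i]
            indexed.append((score, i))
        indexed.sort(reverse=True)
        return [i for _, i in indexed[: self.k]]

    def update(self, cached, reward_fn):
        """Record the observed latency saving for each cached task."""
        for i in cached:
            self.counts[i] += 1
            self.rewards[i] += reward_fn(i)
```

Driving this with a stationary request popularity (hypothetical numbers) shows the cache converging onto the most-requested tasks without ever being told the request pattern, which is the online-learning property the abstract claims for the real algorithm.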


Figures 1–7 (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/c8f82d4eeda1/nihms-1729024-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/db73449eed9a/nihms-1729024-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/83df376493af/nihms-1729024-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/e49c6ec06a18/nihms-1729024-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/881c1413bf36/nihms-1729024-f0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/848b3e9db204/nihms-1729024-f0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fba/8370040/d432bf97f3a4/nihms-1729024-f0007.jpg