

MonkeyKing: Adaptive Parameter Tuning on Big Data Platforms with Deep Reinforcement Learning.

Affiliations

College of Electronics and Information Engineering, Tongji University, Shanghai, China.

School of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China.

Publication Information

Big Data. 2020 Aug;8(4):270-290. doi: 10.1089/big.2019.0123. Epub 2020 Jul 10.

DOI: 10.1089/big.2019.0123
PMID: 32654536
Abstract

Choosing the right parameter configuration for recurring jobs on big data analytics platforms is difficult: there can be hundreds of candidate configurations to pick from, and the appropriate choice depends on the application type and user requirements. The gap between the best and the worst configuration can amount to a performance difference of more than 10 times. Moreover, the parameters of big data platforms are not independent, which makes it a challenge to automatically identify the optimal configuration for a broad spectrum of applications. To alleviate these problems, we propose MonkeyKing, a system that leverages past experience and collects new information to adjust the parameter configurations of big data platforms. It recommends key parameters that strongly affect performance according to the job type, and then applies deep reinforcement learning (DRL) to optimize those key parameters and improve job performance. We evaluated the popular deep Q-network (DQN) structure and its improved variants, including DQN, Double DQN, Dueling DQN, and the combination of Double DQN and Dueling DQN, and found that the combined Double DQN and Dueling DQN performs best. Our experiments and evaluations on Spark show that performance can be improved by ∼25% under the best conditions.


Similar Articles

1
MonkeyKing: Adaptive Parameter Tuning on Big Data Platforms with Deep Reinforcement Learning.
Big Data. 2020 Aug;8(4):270-290. doi: 10.1089/big.2019.0123. Epub 2020 Jul 10.
2
Slicing Resource Allocation Based on Dueling DQN for eMBB and URLLC Hybrid Services in Heterogeneous Integrated Networks.
Sensors (Basel). 2023 Feb 24;23(5):2518. doi: 10.3390/s23052518.
3
Deep reinforcement learning for automated radiation adaptation in lung cancer.
Med Phys. 2017 Dec;44(12):6690-6705. doi: 10.1002/mp.12625. Epub 2017 Nov 14.
4
An innovative parameter optimization of Spark Streaming based on D3QN with Gaussian process regression.
Math Biosci Eng. 2023 Jul 3;20(8):14464-14486. doi: 10.3934/mbe.2023647.
5
A Novel Reinforcement Learning Approach for Spark Configuration Parameter Optimization.
Sensors (Basel). 2022 Aug 8;22(15):5930. doi: 10.3390/s22155930.
6
Approximate Policy-Based Accelerated Deep Reinforcement Learning.
IEEE Trans Neural Netw Learn Syst. 2020 Jun;31(6):1820-1830. doi: 10.1109/TNNLS.2019.2927227. Epub 2019 Aug 6.
7
Constrained Deep Q-Learning Gradually Approaching Ordinary Q-Learning.
Front Neurorobot. 2019 Dec 10;13:103. doi: 10.3389/fnbot.2019.00103. eCollection 2019.
8
Two Tier Slicing Resource Allocation Algorithm Based on Deep Reinforcement Learning and Joint Bidding in Wireless Access Networks.
Sensors (Basel). 2022 May 4;22(9):3495. doi: 10.3390/s22093495.
9
Enhancing Stability and Performance in Mobile Robot Path Planning with PMR-Dueling DQN Algorithm.
Sensors (Basel). 2024 Feb 27;24(5):1523. doi: 10.3390/s24051523.
10
Multisource Transfer Double DQN Based on Actor Learning.
IEEE Trans Neural Netw Learn Syst. 2018 Jun;29(6):2227-2238. doi: 10.1109/TNNLS.2018.2806087.