Automating the Configuration of MapReduce: A Reinforcement Learning Scheme

Times cited: 8
Authors
Mu, Ting-Yu [1 ]
Al-Fuqaha, Ala [1 ,2 ]
Salah, Khaled [3 ]
Affiliations
[1] Western Michigan Univ, Comp Sci Dept, Kalamazoo, MI 49008 USA
[2] Hamad Bin Khalifa Univ, Coll Sci & Engn, Doha, Qatar
[3] Khalifa Univ Sci & Technol, Elect & Comp Engn Dept, Abu Dhabi, U Arab Emirates
Source
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS | 2020, Vol. 50, No. 11
Keywords
Deep learning; deep Q-network (DQN); machine learning; MapReduce; neural networks; reinforcement learning (RL); self-configuration;
DOI
10.1109/TSMC.2019.2951789
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
With the exponential growth of data and the high demand for analyzing large datasets, the MapReduce framework has been widely used to process data in a timely, cost-effective manner. It is well known that the performance of MapReduce is limited by its default configuration parameters, and several research studies have focused on finding optimal configurations to improve the performance of the MapReduce framework. Recently, machine-learning-based approaches have received increasing attention as a way to automatically configure the MapReduce parameters and account for the dynamic nature of applications. In this article, we propose and develop a reinforcement learning (RL)-based scheme, named RL-MRCONF, to automatically configure the MapReduce parameters. Specifically, we explore and experiment with two variations of RL-MRCONF: one based on the traditional RL algorithm and the other on a deep RL algorithm. Simulation results show that RL-MRCONF can successfully and effectively auto-configure the MapReduce parameters dynamically according to changes in job types and computing resources. Moreover, the proposed RL-MRCONF scheme outperforms the traditional RL-based implementation. Using datasets provided by MR-Perf, simulation results show that our scheme yields roughly a 50% improvement in execution time compared with MapReduce under its default settings.
Pages: 4183-4196
Page count: 14
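To make the idea described in the abstract concrete, the sketch below (which is not the authors' RL-MRCONF implementation) uses simple tabular Q-learning to pick MapReduce configuration values per job type, with the negative job execution time as the reward. The parameter names (mapreduce.task.io.sort.mb, mapreduce.job.reduces), the value grids, the job types, and the simulated_runtime cost model are all illustrative assumptions; in practice the reward would come from timing real MapReduce runs, and the paper's deep-RL variant would replace the Q-table with a deep Q-network that generalizes across job types and computing resources.

```python
# Minimal sketch, NOT the authors' RL-MRCONF code: tabular Q-learning over a
# small, discretized MapReduce configuration space. Parameter names, value
# grids, job types, and the simulated runtime model are illustrative
# assumptions introduced for this example only.
import random

PARAM_GRID = {
    "mapreduce.task.io.sort.mb": [100, 200, 400],    # map-side sort buffer (MB)
    "mapreduce.job.reduces": [4, 8, 16],             # number of reduce tasks
}
ACTIONS = [(mb, r) for mb in PARAM_GRID["mapreduce.task.io.sort.mb"]
                   for r in PARAM_GRID["mapreduce.job.reduces"]]

def simulated_runtime(sort_mb, reduces, job_size_gb):
    """Stand-in for launching a real MapReduce job and timing it (assumed model)."""
    spill_penalty = max(0.0, job_size_gb * 10 - sort_mb) * 0.05   # map-side spills
    shuffle_cost = job_size_gb * 60.0 / reduces + 2.0 * reduces   # shuffle vs. task overhead
    return 30.0 + spill_penalty + shuffle_cost + random.uniform(-2.0, 2.0)

def train(episodes=2000, alpha=0.1, epsilon=0.2):
    # One state per job type (assumed: small vs. large input); each episode is a
    # single configuration decision, so the update reduces to a bandit-style
    # Q-update with no discounted future term.
    job_types = {"small": 5, "large": 50}            # input size in GB (assumed)
    q = {s: [0.0] * len(ACTIONS) for s in job_types}
    for _ in range(episodes):
        state = random.choice(list(job_types))
        if random.random() < epsilon:                # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
        sort_mb, reduces = ACTIONS[a]
        reward = -simulated_runtime(sort_mb, reduces, job_types[state])
        q[state][a] += alpha * (reward - q[state][a])
    # Return the best configuration found for each job type.
    return {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
            for s in job_types}

if __name__ == "__main__":
    print(train())   # e.g. {'small': (100, 4), 'large': (400, 16)} under the assumed model
```

Running the script prints one (io.sort.mb, reduces) pair per job type; the point of the sketch is only to show how job characteristics map to states, configuration choices to actions, and execution time to (negative) reward.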