Automating the Configuration of MapReduce: A Reinforcement Learning Scheme

Cited by: 8
Authors
Mu, Ting-Yu [1]
Al-Fuqaha, Ala [1,2]
Salah, Khaled [3]
Affiliations
[1] Western Michigan Univ, Comp Sci Dept, Kalamazoo, MI 49008 USA
[2] Hamad Bin Khalifa Univ, Coll Sci & Engn, Doha, Qatar
[3] Khalifa Univ Sci & Technol, Elect & Comp Engn Dept, Abu Dhabi, U Arab Emirates
Source
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2020, Vol. 50, No. 11
Keywords
Deep learning; deep Q-network (DQN); machine learning; MapReduce; neural networks; reinforcement learning (RL); self-configuration
DOI
10.1109/TSMC.2019.2951789
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
With the exponential growth of data and the high demand for analyzing large datasets, the MapReduce framework has been widely utilized to process data in a timely, cost-effective manner. It is well known that the performance of MapReduce is limited by its default configuration parameters, and several research studies have focused on finding optimal configurations to improve the framework's performance. Recently, machine-learning-based approaches have received increasing attention as a means of auto-configuring the MapReduce parameters to account for the dynamic nature of the applications. In this article, we propose and develop a reinforcement learning (RL)-based scheme, named RL-MRCONF, to automatically configure the MapReduce parameters. Specifically, we explore and experiment with two variations of RL-MRCONF: one based on the traditional RL algorithm and the other based on the deep RL algorithm. Simulation results show that RL-MRCONF can successfully and effectively auto-configure the MapReduce parameters dynamically in response to changes in job types and computing resources. Moreover, the simulation results show that our proposed RL-MRCONF scheme outperforms the traditional RL-based implementation. Using datasets provided by MR-Perf, the simulations show that our proposed scheme yields around 50% improvement in execution time compared with MapReduce under default settings.
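The core idea of the abstract, mapping a job's characteristics (state) to a configuration choice (action) and learning from the resulting execution time (reward), can be illustrated with a minimal tabular Q-learning sketch. Everything below is a hypothetical toy: the job types, the parameter grid, and the runtime model are illustrative stand-ins, not the paper's RL-MRCONF implementation or the MR-Perf datasets.

```python
import random

# Illustrative sketch only: states are job types, actions are candidate
# MapReduce configurations, and the reward is the negative of a simulated
# execution time (faster job => higher reward).

JOB_TYPES = ["wordcount", "terasort"]        # hypothetical job types (states)
CONFIGS = [                                  # actions: (map slots, sort buffer MB)
    (2, 100), (4, 100), (2, 200), (4, 200),
]

def simulated_runtime(job, cfg):
    """Toy execution-time model in seconds; a real reward would come from runs."""
    maps, sort_mb = cfg
    base = 120.0 if job == "wordcount" else 200.0
    # In this toy model, wordcount benefits from more map slots and
    # terasort benefits from a larger sort buffer.
    return base - (10 * maps if job == "wordcount" else sort_mb / 10.0)

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table: expected reward for picking config a when the job type is j.
    q = {(j, a): 0.0 for j in JOB_TYPES for a in range(len(CONFIGS))}
    for _ in range(episodes):
        job = rng.choice(JOB_TYPES)
        if rng.random() < epsilon:           # explore a random configuration
            a = rng.randrange(len(CONFIGS))
        else:                                # exploit the best-known configuration
            a = max(range(len(CONFIGS)), key=lambda i: q[(job, i)])
        reward = -simulated_runtime(job, CONFIGS[a])
        q[(job, a)] += alpha * (reward - q[(job, a)])  # incremental value update
    return q

q = train()
best = {j: CONFIGS[max(range(len(CONFIGS)), key=lambda i: q[(j, i)])]
        for j in JOB_TYPES}
print(best)
```

The deep-RL variation described in the abstract would replace the Q-table with a neural network (a DQN) so that the agent can generalize across continuous job and resource features rather than enumerating discrete states.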
Pages: 4183-4196
Page count: 14