FASTune: Towards Fast and Stable Database Tuning System with Reinforcement Learning

Cited by: 0
Authors
Shi, Lei [1 ,2 ,3 ]
Li, Tian [2 ]
Wei, Lin [1 ]
Tao, Yongcai [2 ]
Li, Cuixia [1 ]
Gao, Yufei [1 ,3 ]
Affiliations
[1] Zhengzhou Univ, Sch Cyber Sci & Engn, Zhengzhou 450002, Peoples R China
[2] Zhengzhou Univ, Sch Comp & Artificial Intelligence, Zhengzhou 450001, Peoples R China
[3] Songshan Lab, Zhengzhou 450046, Peoples R China
Keywords
database tuning; reinforcement learning; decision making; deep learning;
DOI
10.3390/electronics12102168
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Configuration tuning is vital to achieving high performance in a database management system (DBMS). Recently, automatic tuning methods based on Reinforcement Learning (RL) have been explored to find better configurations than those produced by database administrators (DBAs) and heuristics. However, existing RL-based methods still have several limitations: (1) excessive overhead due to reliance on cloned databases; (2) the trial-and-error strategy may produce dangerous configurations that lead to database failure; and (3) they lack the ability to handle dynamic workloads. To address these challenges, FASTune, a fast and stable RL-based database tuning system, is proposed. A virtual environment is introduced to evaluate configurations, an equivalent yet more efficient scheme than a cloned database. To ensure stability during tuning, FASTune adopts an environment proxy to avoid dangerous configurations. In addition, a Multi-State Soft Actor-Critic (MS-SAC) model is proposed to handle dynamic workloads; it uses the soft actor-critic network to tune the database according to the workload and database states. Experimental results indicate that, compared with state-of-the-art methods, FASTune achieves performance improvements while maintaining stability during tuning.
Pages: 22
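
The abstract above outlines three components: a virtual environment for evaluating configurations, an environment proxy that filters dangerous actions, and an MS-SAC agent driven by workload and database states. The sketch below is a minimal, self-contained illustration of how such pieces could fit together in a single tuning step; it is not the authors' implementation, and the knob names, state features, safety margin, and the linear placeholder for the actor are all illustrative assumptions.

import numpy as np

# Hypothetical knob ranges; names and bounds are illustrative, not taken from the paper.
KNOB_RANGES = {
    "buffer_pool_size_mb": (128, 8192),
    "max_connections": (50, 2000),
    "io_capacity": (200, 20000),
}

SAFE_FRACTION = 0.8  # assumed proxy rule: keep actions inside the inner 80% of each knob range


def denormalize(action):
    """Map an action vector in [-1, 1]^k to concrete knob values."""
    return {
        name: lo + (a + 1.0) / 2.0 * (hi - lo)
        for a, (name, (lo, hi)) in zip(action, KNOB_RANGES.items())
    }


def environment_proxy(action):
    """Stand-in for the environment proxy: clamp risky actions before they are applied.

    The real system's rules for detecting dangerous configurations are not reproduced here.
    """
    return np.clip(action, -SAFE_FRACTION, SAFE_FRACTION)


def virtual_environment(config, workload_state):
    """Toy stand-in for the virtual environment that scores a configuration.

    Returns a synthetic throughput-like reward instead of replaying the workload on a
    cloned database, which is the overhead the paper aims to avoid.
    """
    utilisation = sum(v / hi for v, (_, (_, hi)) in zip(config.values(), KNOB_RANGES.items()))
    return utilisation + 0.1 * workload_state.mean()


def actor(state, weights):
    """Linear placeholder for the MS-SAC actor: maps a state to an action in [-1, 1]^k."""
    return np.tanh(weights @ state)


# One tuning step: build the combined state, propose an action, filter it, evaluate it.
workload_state = np.array([0.6, 0.3, 0.1])   # e.g. read/write/scan mix (illustrative features)
db_state = np.array([0.4, 0.7])              # e.g. buffer hit ratio, lock waits (illustrative)
state = np.concatenate([workload_state, db_state])

rng = np.random.default_rng(1)
weights = rng.normal(size=(len(KNOB_RANGES), state.size))

action = environment_proxy(actor(state, weights))  # proxy filters dangerous actions
config = denormalize(action)
reward = virtual_environment(config, workload_state)
print(config, round(reward, 3))

In a full system the reward would feed back into the soft actor-critic update, and the state would be refreshed from live workload and database metrics; this sketch only shows the propose-filter-evaluate cycle described in the abstract.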