A Deep Reinforcement Learning Approach to Configuration Sampling Problem

Cited by: 0
Authors
Abolfazli, Amir [1 ]
Spiegelberg, Jakob [2 ]
Palmer, Gregory [1 ]
Anand, Avishek [3 ]
Affiliations
[1] L3S Res Ctr, Hannover, Germany
[2] Volkswagen AG, Wolfsburg, Germany
[3] Delft Univ Technol, Delft, Netherlands
Source
23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023 | 2023
Keywords
reinforcement learning; configuration sampling; software testing; systems
DOI
10.1109/CAMSAP58249.2023.00009
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Configurable software systems have become increasingly popular because they enable customized software variants. The main challenge in configuration problems is that the number of possible configurations grows exponentially with the number of features. Algorithms for testing customized software must therefore tractably find potentially faulty configurations within an exponentially large configuration space. To overcome this problem, prior work has focused on sampling strategies that significantly reduce the number of generated configurations while guaranteeing high t-wise coverage. In this work, we address the configuration sampling problem by proposing a deep reinforcement learning (DRL) based sampler that efficiently balances exploration and exploitation, allowing for the efficient identification of a minimal subset of configurations that covers all t-wise feature interactions while minimizing redundancy. We also present CS-Gym, an environment for configuration sampling. We benchmark our results against heuristic-based sampling methods on eight different feature models of software product lines and show that our method outperforms all sampling methods in terms of sample size. Our findings indicate that the achieved improvement has major implications for cost reduction, as the reduction in sample size means fewer configurations need to be tested.
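To make the abstract's coverage objective concrete, here is a minimal, illustrative sketch of how the t-wise coverage of a configuration sample can be measured. This is not the paper's CS-Gym or its DRL sampler; the function name `t_wise_coverage` and the assumption of unconstrained boolean features are ours, introduced only for illustration (real feature models exclude invalid value combinations from the denominator).

```python
from itertools import combinations
from math import comb

def t_wise_coverage(configs, n_features, t=2):
    """Fraction of possible t-wise feature-value interactions covered.

    configs: iterable of tuples with one boolean (0/1) value per feature.
    Assumes an unconstrained boolean feature model, so the denominator
    is C(n_features, t) * 2**t.
    """
    covered = set()
    for cfg in configs:
        # Record every (feature-index tuple, value tuple) interaction seen.
        for idxs in combinations(range(n_features), t):
            covered.add((idxs, tuple(cfg[i] for i in idxs)))
    return len(covered) / (comb(n_features, t) * 2 ** t)

# Three configurations over three boolean features cover 8 of the
# 12 possible pairwise (t=2) interactions.
sample = [(0, 0, 0), (1, 1, 1), (0, 1, 0)]
print(t_wise_coverage(sample, 3, t=2))  # → 0.6666666666666666
```

A sampler's goal, in these terms, is to drive the coverage to 1.0 with as few configurations as possible; the paper's contribution is learning that selection policy with DRL rather than a heuristic.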
Pages: 1-10 (10 pages)
Related Papers (50 total)
[21] Pitombeira-Neto, Anselmo R.; Murta, Arthur H. F. A reinforcement learning approach to the stochastic cutting stock problem [J]. EURO JOURNAL ON COMPUTATIONAL OPTIMIZATION, 2022, 10.
[22] Bocicor, Maria-Iuliana; Czibula, Gabriela; Czibula, Istvan-Gergely. A Reinforcement Learning Approach for Solving the Fragment Assembly Problem [J]. 13TH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING (SYNASC 2011), 2012: 191-198.
[23] Bidi, Kala Agbo; Coron, Jean-Michel; Hayat, Amaury; Lichtle, Nathan. A novel approach to feedback control with deep reinforcement learning [J]. SYSTEMS & CONTROL LETTERS, 2025, 202.
[24] Wang, Yuan; Velswamy, Kirubakaran; Huang, Biao. A Novel Approach to Feedback Control with Deep Reinforcement Learning [J]. IFAC PAPERSONLINE, 2018, 51 (18): 31-36.
[25] Pan, Weixu; Liu, Shi Qiang. Deep reinforcement learning for the dynamic and uncertain vehicle routing problem [J]. APPLIED INTELLIGENCE, 2023, 53 (01): 405-422.
[26] Dogan, Ibrahim; Guener, Ali R. A reinforcement learning approach to competitive ordering and pricing problem [J]. EXPERT SYSTEMS, 2015, 32 (01): 39-48.
[27] Podobnik, Janez; Udir, Ana; Munih, Marko; Mihelj, Matjaz. Teaching approach for deep reinforcement learning of robotic strategies [J]. COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, 2024, 32 (06).
[29] Agasucci, Valerio; Grani, Giorgio; Lamorgese, Leonardo. Solving the train dispatching problem via deep reinforcement learning [J]. JOURNAL OF RAIL TRANSPORT PLANNING & MANAGEMENT, 2023, 26.
[30] Bi, Zhiliang; Guo, Xiwang; Wang, Jiacun; Qin, Shujin; Liu, Guanjun. Deep Reinforcement Learning for Truck-Drone Delivery Problem [J]. DRONES, 2023, 7 (07).