A Deep Reinforcement Learning Approach to Configuration Sampling Problem

Cited by: 2
Authors
Abolfazli, Amir [1 ]
Spiegelberg, Jakob [2]
Palmer, Gregory [1 ]
Anand, Avishek [3 ]
Affiliations
[1] L3S Res Ctr, Hannover, Germany
[2] Volkswagen AG, Wolfsburg, Germany
[3] Delft Univ Technol, Delft, Netherlands
Source
23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023 | 2023
Keywords
reinforcement learning; configuration sampling; software testing; SYSTEMS;
D O I
10.1109/ICDM58522.2023.00009
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Configurable software systems have become increasingly popular as they enable customized software variants. The main challenge in dealing with configuration problems is that the number of possible configurations grows exponentially as the number of features increases. Therefore, algorithms for testing customized software have to deal with the challenge of tractably finding potentially faulty configurations within an exponentially large configuration space. To overcome this problem, prior works focused on sampling strategies that significantly reduce the number of generated configurations while guaranteeing high t-wise coverage. In this work, we address the configuration sampling problem by proposing a deep reinforcement learning (DRL) based sampler that efficiently finds the trade-off between exploration and exploitation, allowing for the efficient identification of a minimal subset of configurations that covers all t-wise feature interactions while minimizing redundancy. We also present CS-Gym, an environment for configuration sampling. We benchmark our results against heuristic-based sampling methods on eight different feature models of software product lines and show that our method outperforms all sampling methods in terms of sample size. Our findings indicate that the achieved improvement has major implications for cost reduction, as the reduction in sample size results in fewer configurations that need to be tested.
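To make the t-wise coverage objective from the abstract concrete: for t = 2 with binary features, a sample must cover every pair of features under every combination of values. The sketch below is an illustrative greedy baseline over an unconstrained binary feature model, not the paper's DRL sampler; all function names are assumptions for illustration:

```python
from itertools import combinations, product

def pairwise_interactions(n_features):
    """All 2-wise (t=2) interactions: each pair of features under
    every combination of on/off values."""
    return {
        (i, vi, j, vj)
        for i, j in combinations(range(n_features), 2)
        for vi, vj in product([0, 1], repeat=2)
    }

def covered(config, interactions):
    """Subset of `interactions` that one configuration (0/1 tuple) covers."""
    return {
        (i, config[i], j, config[j])
        for i, j in combinations(range(len(config)), 2)
    } & interactions

def greedy_pairwise_sample(n_features):
    """Greedily add the configuration covering the most uncovered
    interactions until every 2-wise interaction is covered."""
    remaining = pairwise_interactions(n_features)
    all_configs = list(product([0, 1], repeat=n_features))
    sample = []
    while remaining:
        best = max(all_configs, key=lambda c: len(covered(c, remaining)))
        remaining -= covered(best, remaining)
        sample.append(best)
    return sample
```

For 3 binary features there are 2^3 = 8 configurations but C(3,2) * 4 = 12 pairwise interactions, and the greedy baseline covers them with 4 configurations. The paper's contribution is learning a sampling policy that pushes such sample sizes below what heuristics like this achieve.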
Pages: 1-10 (10 pages)