Multi-armed bandit based online model selection for concept-drift adaptation

Times Cited: 0
Authors
Wilson, Jobin [1 ,2 ]
Chaudhury, Santanu [2 ,3 ]
Lall, Brejesh [2 ]
Affiliations
[1] Flytxt, R&D Dept, Trivandrum, Kerala, India
[2] Indian Inst Technol Delhi, Dept Elect Engn, New Delhi, India
[3] Indian Inst Technol Jodhpur, Dept Comp Sci & Engn, Jodhpur, India
Keywords
concept-drift; ensemble methods; model selection; multi-armed bandits; classification; framework
DOI
10.1111/exsy.13626
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Ensemble methods are among the most effective concept-drift adaptation techniques due to their high learning performance and flexibility. However, they are computationally expensive and pose a challenge in applications involving high-speed data streams. In this paper, we present a computationally efficient heterogeneous classifier ensemble called OMS-MAB, which uses online model selection for concept-drift adaptation by posing it as a non-stationary multi-armed bandit (MAB) problem. We use a MAB to select a single adaptive learner within the ensemble for learning and prediction while systematically exploring promising alternatives. Each ensemble member is made drift-resistant using explicit drift detection and is represented as an arm of the MAB. An exploration factor ε controls the trade-off between predictive performance and computational resource requirements, eliminating the need to continuously train and evaluate all the ensemble members. A rigorous evaluation on 20 benchmark datasets and 9 algorithms indicates that the accuracy of OMS-MAB is statistically on par with state-of-the-art (SOTA) ensembles. Moreover, it offers a significant reduction in execution time and model size in comparison to several SOTA ensemble methods, making it a promising ensemble for resource-constrained stream-mining problems.
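The abstract describes the mechanism only at a high level. Below is a minimal, illustrative sketch of the general idea, assuming an epsilon-greedy policy with exponentially discounted reward estimates as the non-stationary bandit and prequential accuracy as the reward signal. The class names (`EpsilonGreedySelector`, `MajorityClass`), the reward scheme, and the omission of the per-arm drift detectors are assumptions for illustration; this is not the paper's OMS-MAB implementation.

```python
# Sketch only: online model selection over a pool of incremental learners,
# treated as arms of a non-stationary epsilon-greedy bandit. Only the selected
# arm is trained and evaluated at each step, which is where the computational
# saving over training the full ensemble comes from.
import random


class EpsilonGreedySelector:
    """Pick one base learner per instance; train only that learner."""

    def __init__(self, learners, epsilon=0.1, discount=0.99, seed=42):
        self.learners = learners                 # ensemble members = bandit arms
        self.epsilon = epsilon                   # exploration factor (the abstract's ε)
        self.discount = discount                 # forgetting factor for non-stationary rewards
        self.rewards = [0.0] * len(learners)     # discounted cumulative reward per arm
        self.counts = [1e-9] * len(learners)     # discounted pull counts (avoids division by zero)
        self.rng = random.Random(seed)

    def _select_arm(self):
        # Explore a random arm with probability epsilon, otherwise exploit.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.learners))
        means = [r / c for r, c in zip(self.rewards, self.counts)]
        return max(range(len(means)), key=means.__getitem__)

    def _update(self, arm, reward):
        # Exponential discounting keeps reward estimates responsive to concept drift.
        for i in range(len(self.learners)):
            self.rewards[i] *= self.discount
            self.counts[i] *= self.discount
        self.rewards[arm] += reward
        self.counts[arm] += 1.0

    def learn_one(self, x, y):
        arm = self._select_arm()
        learner = self.learners[arm]
        y_pred = learner.predict_one(x)          # prequential: predict first ...
        self._update(arm, 1.0 if y_pred == y else 0.0)
        learner.learn_one(x, y)                  # ... then train only the chosen arm
        return y_pred


class MajorityClass:
    """Toy incremental learner, included only to make the sketch runnable."""

    def __init__(self):
        self.label_counts = {}

    def predict_one(self, x):
        return max(self.label_counts, key=self.label_counts.get) if self.label_counts else 0

    def learn_one(self, x, y):
        self.label_counts[y] = self.label_counts.get(y, 0) + 1


if __name__ == "__main__":
    selector = EpsilonGreedySelector([MajorityClass(), MajorityClass()], epsilon=0.1)
    stream = [({"f": i}, i % 2) for i in range(100)]   # toy data stream
    correct = sum(selector.learn_one(x, y) == y for x, y in stream)
    print(f"prequential accuracy: {correct / len(stream):.2f}")
```

In the paper's setting each arm would additionally wrap its learner with an explicit drift detector and a heterogeneous pool of base learners would replace the toy `MajorityClass` predictors; the discounted reward update above is one common way to handle non-stationarity, not necessarily the one used by the authors.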
Pages: 25
Related Papers
50 records
  • [21] A multi-armed bandit approach for exploring partially observed networks
    Kaushalya Madhawa
    Tsuyoshi Murata
    Applied Network Science, 4
  • [22] A Bayesian Multi-armed Bandit Approach for Identifying Human Vulnerabilities
    Miehling, Erik
    Xiao, Baicen
    Poovendran, Radha
    Basar, Tamer
    DECISION AND GAME THEORY FOR SECURITY, GAMESEC 2018, 2018, 11199 : 521 - 539
  • [23] FedAB: Truthful Federated Learning With Auction-Based Combinatorial Multi-Armed Bandit
    Wu, Chenrui
    Zhu, Yifei
    Zhang, Rongyu
    Chen, Yun
    Wang, Fangxin
    Cui, Shuguang
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (17) : 15159 - 15170
  • [24] Multi-User Communication Networks: A Coordinated Multi-Armed Bandit Approach
    Avner, Orly
    Mannor, Shie
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2019, 27 (06) : 2192 - 2207
  • [25] Randomized allocation with nonparametric estimation for a multi-armed bandit problem with covariates
    Yang, YH
    Zhu, D
    ANNALS OF STATISTICS, 2002, 30 (01) : 100 - 121
  • [26] A Simple Multi-Armed Nearest-Neighbor Bandit for Interactive Recommendation
    Sanz-Cruzado, Javier
    Castells, Pablo
    Lopez, Esther
    RECSYS 2019: 13TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2019, : 358 - 362
  • [27] Multi-Armed Bandit Algorithm Policy for LoRa Network Performance Enhancement
    Askhedkar, Anjali R.
    Chaudhari, Bharat S.
    JOURNAL OF SENSOR AND ACTUATOR NETWORKS, 2023, 12 (03)
  • [28] On Multi-Armed Bandit Designs for Dose-Finding Clinical Trials
    Aziz, Maryam
    Kaufmann, Emilie
    Riviere, Marie-Karelle
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [29] On Ensemble Components Selection in Data Streams Scenario with Reoccurring Concept-Drift
    Duda, Piotr
    Jaworski, Maciej
    Rutkowski, Leszek
    2017 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2017, : 1821 - 1827
  • [30] On Ensemble Components Selection in Data Streams Scenario with Gradual Concept-Drift
    Duda, Piotr
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING (ICAISC 2018), PT II, 2018, 10842 : 311 - 320