Multi-armed bandit based online model selection for concept-drift adaptation

Cited by: 0
Authors
Wilson, Jobin [1 ,2 ]
Chaudhury, Santanu [2 ,3 ]
Lall, Brejesh [2 ]
Affiliations
[1] Flytxt, R&D Dept, Trivandrum, Kerala, India
[2] Indian Inst Technol Delhi, Dept Elect Engn, New Delhi, India
[3] Indian Inst Technol Jodhpur, Dept Comp Sci & Engn, Jodhpur, India
Keywords
concept-drift; ensemble methods; model selection; multi-armed bandits; CLASSIFICATION; FRAMEWORK;
DOI
10.1111/exsy.13626
CLC classification number
TP18 (theory of artificial intelligence);
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Ensemble methods are among the most effective concept-drift adaptation techniques due to their high learning performance and flexibility. However, they are computationally expensive and pose a challenge in applications involving high-speed data streams. In this paper, we present a computationally efficient heterogeneous classifier ensemble entitled OMS-MAB which uses online model selection for concept-drift adaptation by posing it as a non-stationary multi-armed bandit (MAB) problem. We use a MAB to select a single adaptive learner within the ensemble for learning and prediction while systematically exploring promising alternatives. Each ensemble member is made drift resistant using explicit drift detection and is represented as an arm of the MAB. An exploration factor ε controls the trade-off between predictive performance and computational resource requirements, eliminating the need to continuously train and evaluate all the ensemble members. A rigorous evaluation on 20 benchmark datasets and 9 algorithms indicates that the accuracy of OMS-MAB is statistically at par with state-of-the-art (SOTA) ensembles. Moreover, it offers a significant reduction in execution time and model size in comparison to several SOTA ensemble methods, making it a promising ensemble for resource constrained stream-mining problems.
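To illustrate the core idea the abstract describes, the sketch below shows ε-greedy arm selection with a recency-weighted reward estimate, a standard treatment of non-stationary bandits. This is a minimal, hypothetical sketch only: the class name `EpsilonGreedySelector`, the fixed step size `alpha`, and the accuracy-as-reward convention are assumptions, not OMS-MAB's actual design, which also involves per-arm drift detection not modelled here.

```python
import random

class EpsilonGreedySelector:
    """Treat each base learner as a bandit arm. With probability epsilon an
    arm is explored at random; otherwise the arm with the highest reward
    estimate is exploited, so only one learner is trained per instance."""

    def __init__(self, n_arms, epsilon=0.1, alpha=0.1):
        self.epsilon = epsilon            # exploration probability
        self.alpha = alpha                # fixed step size -> recency weighting
        self.values = [0.0] * n_arms      # per-arm reward (accuracy) estimates

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best arm.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Exponential recency-weighted average: a constant step size keeps
        # the estimate responsive to concept drift (non-stationary rewards).
        self.values[arm] += self.alpha * (reward - self.values[arm])
```

In a stream-mining loop, `select` would pick the learner that predicts and trains on the next instance, and `update` would feed back 1 for a correct prediction and 0 otherwise.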
Pages: 25