Multi-armed bandit based online model selection for concept-drift adaptation

Cited: 0
Authors
Wilson, Jobin [1 ,2 ]
Chaudhury, Santanu [2 ,3 ]
Lall, Brejesh [2 ]
Institutions
[1] Flytxt, R&D Dept, Trivandrum, Kerala, India
[2] Indian Inst Technol Delhi, Dept Elect Engn, New Delhi, India
[3] Indian Inst Technol Jodhpur, Dept Comp Sci & Engn, Jodhpur, India
Keywords
concept-drift; ensemble methods; model selection; multi-armed bandits; CLASSIFICATION; FRAMEWORK;
DOI
10.1111/exsy.13626
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Ensemble methods are among the most effective concept-drift adaptation techniques due to their high learning performance and flexibility. However, they are computationally expensive and pose a challenge in applications involving high-speed data streams. In this paper, we present a computationally efficient heterogeneous classifier ensemble called OMS-MAB, which performs online model selection for concept-drift adaptation by posing it as a non-stationary multi-armed bandit (MAB) problem. We use a MAB to select a single adaptive learner within the ensemble for learning and prediction while systematically exploring promising alternatives. Each ensemble member is made drift resistant using explicit drift detection and is represented as an arm of the MAB. An exploration factor ε controls the trade-off between predictive performance and computational resource requirements, eliminating the need to continuously train and evaluate all the ensemble members. A rigorous evaluation on 20 benchmark datasets and 9 algorithms indicates that the accuracy of OMS-MAB is statistically at par with state-of-the-art (SOTA) ensembles. Moreover, it offers a significant reduction in execution time and model size in comparison to several SOTA ensemble methods, making it a promising ensemble for resource-constrained stream-mining problems.
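The abstract describes selecting one learner per step via a non-stationary MAB, with an exploration factor ε balancing accuracy against compute. The sketch below is a minimal illustration of that idea only, not the authors' OMS-MAB algorithm: a hypothetical `EpsilonGreedyModelSelector` treats each candidate model as an arm, picks one with ε-greedy exploration, and tracks rewards (e.g., prequential accuracy) with a constant step size so old rewards are discounted, which is a standard way to handle non-stationarity such as concept drift.

```python
import random


class EpsilonGreedyModelSelector:
    """Hypothetical epsilon-greedy bandit over a pool of candidate models.

    Each model index is one arm. Reward estimates use a constant step
    size (exponential recency weighting), so the selector can shift to a
    different model when the reward distribution changes under drift.
    """

    def __init__(self, n_models, epsilon=0.1, step_size=0.1, seed=0):
        self.n_models = n_models
        self.epsilon = epsilon        # exploration probability
        self.step_size = step_size    # recency weight for non-stationarity
        self.estimates = [0.0] * n_models
        self.rng = random.Random(seed)

    def select(self):
        # With probability epsilon, explore a random arm;
        # otherwise exploit the arm with the highest reward estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_models)
        return max(range(self.n_models), key=lambda i: self.estimates[i])

    def update(self, arm, reward):
        # Constant-step-size incremental update: recent rewards dominate,
        # which lets the estimate track a drifting best model.
        self.estimates[arm] += self.step_size * (reward - self.estimates[arm])


# Usage: only the selected model would be trained/evaluated each step,
# which is the source of the computational savings the paper targets.
selector = EpsilonGreedyModelSelector(n_models=3, epsilon=0.2, seed=42)
for _ in range(500):
    arm = selector.select()
    reward = 1.0 if arm == 1 else 0.0  # pretend model 1 is currently best
    selector.update(arm, reward)
```

Larger ε explores alternative models more often (more robustness to drift, more compute); smaller ε commits harder to the current best model, mirroring the performance/resource trade-off described in the abstract.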
Pages: 25
Related papers
50 items total
  • [41] Automating model management: a survey on metaheuristics for concept-drift adaptation
    Mike Riess
    Journal of Data, Information and Management, 2022, 4 (3-4): 211 - 229
  • [42] RESEARCH ON OPTIMAL SELECTION STRATEGY OF SEARCH ENGINE KEYWORDS BASED ON MULTI-ARMED BANDIT
    Qin, Juan
    Qi, Wei
    Zhou, Baojian
    PROCEEDINGS OF THE 49TH ANNUAL HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES (HICSS 2016), 2016, : 726 - 734
  • [43] A Sensing Policy Based on Confidence Bounds and a Restless Multi-Armed Bandit Model
    Oksanen, Jan
    Koivunen, Visa
    Poor, H. Vincent
    2012 CONFERENCE RECORD OF THE FORTY SIXTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS (ASILOMAR), 2012, : 318 - 323
  • [44] Multiagent Multi-Armed Bandit Schemes for Gateway Selection in UAV Networks
    Hashima, Sherief
    Hatano, Kohei
    Mohamed, Ehab Mahmoud
    2020 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2020,
  • [45] Learning State Selection for Reconfigurable Antennas: A Multi-Armed Bandit Approach
    Gulati, Nikhil
    Dandekar, Kapil R.
    IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, 2014, 62 (03) : 1027 - 1038
  • [46] Robust Trajectory Selection for Rearrangement Planning as a Multi-Armed Bandit Problem
    Koval, Michael C.
    King, Jennifer E.
    Pollard, Nancy S.
    Srinivasa, Siddhartha S.
    2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2015, : 2678 - 2685
  • [47] Automated Collaborator Selection for Federated Learning with Multi-armed Bandit Agents
    Larsson, Hannes
    Riaz, Hassam
    Ickin, Selim
    PROCEEDINGS OF THE 4TH FLEXNETS WORKSHOP ON FLEXIBLE NETWORKS, ARTIFICIAL INTELLIGENCE SUPPORTED NETWORK FLEXIBILITY AND AGILITY (FLEXNETS'21), 2021, : 44 - 49
  • [48] CONTEXTUAL MULTI-ARMED BANDIT ALGORITHMS FOR PERSONALIZED LEARNING ACTION SELECTION
    Manickam, Indu
    Lan, Andrew S.
    Baraniuk, Richard G.
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 6344 - 6348
  • [49] Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model
    Kim, Gi-Soo
    Paik, Myunghee Cho
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [50] Tug-of-War Model for Multi-armed Bandit Problem
    Kim, Song-Ju
    Aono, Masashi
    Hara, Masahiko
    UNCONVENTIONAL COMPUTATION, PROCEEDINGS, 2010, 6079 : 69 - +