Adaptive Batch Sizes for Active Learning: A Probabilistic Numerics Approach

Cited: 0
Authors
Adachi, Masaki [1 ,3 ]
Hayakawa, Satoshi [2 ]
Jorgensen, Martin [4 ]
Wan, Xingchen [1 ]
Vu Nguyen [5 ]
Oberhauser, Harald [2 ]
Osborne, Michael A. [1 ]
Affiliations
[1] Univ Oxford, Machine Learning Res Grp, Oxford, England
[2] Univ Oxford, Math Inst, Oxford, England
[3] Toyota Motor Co Ltd, Toyota, Aichi, Japan
[4] Univ Helsinki, Dept Comp Sci, Helsinki, Finland
[5] Amazon, Seattle, WA USA
Keywords
OPTIMIZATION;
DOI
not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Active learning parallelization is widely used, but typically relies on a batch size that is fixed throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off between cost and speed: larger batches are more costly, smaller batches lead to slower wall-clock run-times, and the trade-off may change over the run (larger batches are often preferable earlier). To address this trade-off, we propose a novel Probabilistic Numerics framework that adaptively changes batch sizes. By framing batch selection as a quadrature task, our integration-error-aware algorithm tunes batch sizes automatically to meet predefined quadrature-precision objectives, much as typical optimizers terminate once a convergence threshold is reached. This approach obviates an exhaustive search across all potential batch sizes. We also extend the framework to constrained active learning and constrained optimization, interpreting constraint violations as reductions in the precision requirement and adapting batch construction accordingly. Through extensive experiments, we demonstrate that our approach significantly enhances learning efficiency and flexibility in diverse Bayesian batch active-learning and Bayesian optimization applications.
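The core idea in the abstract (letting a quadrature-precision objective determine how many points go into each batch, rather than fixing the batch size) can be sketched as follows. This is a minimal illustrative sketch, not the authors' algorithm: it assumes a zero-mean GP with an RBF kernel over the unit cube, a Monte Carlo approximation of the kernel mean embedding, and a greedy point-selection rule; the function names and the tolerance `tol` are invented for illustration.

```python
import numpy as np

def rbf(a, b, ls=0.25):
    # Squared-exponential kernel between row-stacked points a (n,d) and b (m,d).
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-0.5 * np.sum(d**2, axis=-1) / ls**2)

def adaptive_batch(candidates, tol=1e-3, max_batch=50, n_mc=1000, seed=0):
    """Greedy kernel-quadrature batch selection (illustrative sketch).

    Adds candidate points one at a time, each time picking the point that
    most reduces the posterior variance of the integral of a zero-mean GP
    over [0, 1]^d, and stops once that variance drops below `tol`.
    The number of points selected is the adaptive batch size.
    """
    rng = np.random.default_rng(seed)
    mc = rng.uniform(size=(n_mc, candidates.shape[1]))  # MC nodes for the measure
    z = rbf(candidates, mc).mean(axis=1)                # kernel mean embedding at each candidate
    kxx = rbf(candidates, candidates)
    total = rbf(mc, mc).mean()                          # double integral of the kernel
    chosen = []
    for _ in range(max_batch):
        best, best_var = None, np.inf
        for j in range(len(candidates)):
            if j in chosen:
                continue
            idx = chosen + [j]
            K = kxx[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx))
            # Bayesian-quadrature integral variance: ∬k - z^T K^{-1} z
            var = total - z[idx] @ np.linalg.solve(K, z[idx])
            if var < best_var:
                best, best_var = j, var
        chosen.append(best)
        if best_var < tol:  # precision objective met: batch is complete
            break
    return chosen, best_var
```

Early in a run the integral variance is large, so many points are needed to reach `tol` (a large batch); as the model improves, fewer points suffice, so the batch shrinks automatically, mirroring the adaptive behaviour the abstract describes.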
Pages: 32