Adaptive Batch Sizes for Active Learning: A Probabilistic Numerics Approach

Cited by: 0
Authors
Adachi, Masaki [1 ,3 ]
Hayakawa, Satoshi [2 ]
Jorgensen, Martin [4 ]
Wan, Xingchen [1 ]
Vu Nguyen [5 ]
Oberhauser, Harald [2 ]
Osborne, Michael A. [1 ]
Affiliations
[1] Univ Oxford, Machine Learning Res Grp, Oxford, England
[2] Univ Oxford, Math Inst, Oxford, England
[3] Toyota Motor Co Ltd, Toyota, Aichi, Japan
[4] Univ Helsinki, Dept Comp Sci, Helsinki, Finland
[5] Amazon, Seattle, WA, USA
Keywords
OPTIMIZATION;
DOI
None available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off between cost and speed: larger batches are more costly, while smaller batches lead to slower wall-clock run-times, and the trade-off may change over the run (larger batches are often preferable earlier). To address this trade-off, we propose a novel Probabilistic Numerics framework that adaptively changes batch sizes. By framing batch selection as a quadrature task, our integration-error-aware algorithm tunes batch sizes automatically to meet predefined quadrature precision objectives, much as typical optimizers terminate on convergence thresholds. This approach obviates the need for exhaustive searches across all potential batch sizes. We also extend the framework to constrained active learning and constrained optimization, interpreting constraint violations as reductions in the precision requirement and adapting batch construction accordingly. Through extensive experiments, we demonstrate that our approach significantly enhances learning efficiency and flexibility in diverse Bayesian batch active learning and Bayesian optimization applications.
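The stopping rule described in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): here `quadrature_error` is a toy stand-in for the paper's kernel-quadrature error estimate, and the batch grows one candidate at a time until that estimate meets a preset precision tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

def quadrature_error(batch, sigma=1.0):
    # Toy surrogate for the posterior integration error: it shrinks as the
    # batch grows. A hypothetical stand-in for the kernel-quadrature error
    # estimate used in the paper.
    return sigma / np.sqrt(len(batch))

def select_adaptive_batch(candidates, tol, max_batch):
    # Greedily add candidate points until the estimated quadrature error
    # meets the precision objective, mimicking the convergence-threshold
    # style of stopping rule sketched in the abstract.
    batch = []
    for x in candidates:
        batch.append(x)
        if quadrature_error(batch) <= tol or len(batch) >= max_batch:
            break
    return batch

candidates = rng.uniform(0.0, 1.0, size=100)
batch = select_adaptive_batch(candidates, tol=0.2, max_batch=50)
print(len(batch))  # 25, since 1/sqrt(25) = 0.2 meets the tolerance
```

Because the batch size is set by the error estimate rather than fixed in advance, an earlier (higher-uncertainty) round would naturally produce a larger batch than a later one, matching the cost/speed trade-off discussed above.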
Pages: 32
Related Papers
50 items in total
  • [1] Coupling Adaptive Batch Sizes with Learning Rates
    Balles, Lukas
    Romero, Javier
    Hennig, Philipp
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2017), 2017
  • [2] Adaptive Batch Mode Active Learning
    Chakraborty, Shayok
    Balasubramanian, Vineeth
    Panchanathan, Sethuraman
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26 (08) : 1747 - 1760
  • [4] An adaptive approach for determining batch sizes using the hidden Markov model
    Joo, Taejong
    Seo, Minji
    Shin, Dongmin
    JOURNAL OF INTELLIGENT MANUFACTURING, 2019, 30 (02) : 917 - 932
  • [5] Adaptive batch mode active learning with deep similarity
    Zhang, Kaiyuan
    Qian, Buyue
    Wei, Jishang
    Yin, Changchang
    Cao, Shilei
    Li, Xiaoyu
    Cao, Yanjun
    Zheng, Qinghua
    EGYPTIAN INFORMATICS JOURNAL, 2023, 24 (04)
  • [7] Batch mode active learning via adaptive criteria weights
    Li, Hao
    Wang, Yongli
    Li, Yanchao
    Xiao, Gang
    Hu, Peng
    Zhao, Ruxin
    APPLIED INTELLIGENCE, 2021, 51 (06) : 3475 - 3489
  • [8] A batch ensemble approach to active learning with model selection
    Sugiyama, Masashi
    Rubens, Neil
    NEURAL NETWORKS, 2008, 21 (09) : 1278 - 1286
  • [9] Effect of Learning and Forgetting on Batch Sizes
    Teyarachakul, Sunantha
    Chand, Suresh
    Ward, James
    PRODUCTION AND OPERATIONS MANAGEMENT, 2011, 20 (01) : 116 - 128
  • [10] Functional gradient approach to probabilistic minimax active learning
    Ghafarian, Seyed Hossein
    Yazdi, Hadi Sadoghi
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2019, 85 : 21 - 32