Adaptive Batch Sizes for Active Learning: A Probabilistic Numerics Approach

Cited by: 0
Authors
Adachi, Masaki [1 ,3 ]
Hayakawa, Satoshi [2 ]
Jorgensen, Martin [4 ]
Wan, Xingchen [1 ]
Vu Nguyen [5 ]
Oberhauser, Harald [2 ]
Osborne, Michael A. [1 ]
Affiliations
[1] Univ Oxford, Machine Learning Res Grp, Oxford, England
[2] Univ Oxford, Math Inst, Oxford, England
[3] Toyota Motor Co Ltd, Toyota, Aichi, Japan
[4] Univ Helsinki, Dept Comp Sci, Helsinki, Finland
[5] Amazon, Seattle, WA USA
Keywords
OPTIMIZATION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Active learning parallelization is widely used, but typically relies on a batch size that is fixed throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off between cost and speed: larger batches are more costly, while smaller batches lead to slower wall-clock run-times, and the trade-off may change over the run (larger batches are often preferable earlier). To address this trade-off, we propose a novel Probabilistic Numerics framework that adaptively changes batch sizes. By framing batch selection as a quadrature task, our integration-error-aware algorithm automatically tunes batch sizes to meet predefined quadrature precision objectives, much as typical optimizers terminate based on convergence thresholds. This approach obviates the need for exhaustive searches across all potential batch sizes. We also extend the framework to constrained active learning and constrained optimization, interpreting constraint violations as reductions in the precision requirement and adapting batch construction accordingly. Through extensive experiments, we demonstrate that our approach significantly enhances learning efficiency and flexibility in diverse Bayesian batch active learning and Bayesian optimization applications.
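The stopping idea the abstract describes — grow the batch until a quadrature precision objective is met, rather than fixing the batch size in advance — can be illustrated with a minimal sketch. This is not the paper's algorithm (which uses kernel quadrature over a Gaussian-process posterior); it is a loose analogue using the Monte Carlo standard error as the precision criterion, with all names (`adaptive_batch`, `tol`, `sampler`) hypothetical:

```python
import numpy as np

def adaptive_batch(f, sampler, tol=1e-2, max_batch=256, rng=None):
    """Grow a batch of evaluation points until the estimated integration
    error (here, the standard error of the mean) falls below `tol`.

    The batch size is thus an output of the precision objective, not a
    fixed input -- the key contrast with fixed-batch active learning.
    """
    rng = rng or np.random.default_rng(0)
    xs, ys = [], []
    while len(ys) < max_batch:
        x = sampler(rng)        # propose a candidate point
        xs.append(x)
        ys.append(f(x))         # evaluate the integrand / surrogate
        if len(ys) >= 2:
            se = np.std(ys, ddof=1) / np.sqrt(len(ys))  # error estimate
            if se < tol:        # precision objective met -> stop growing
                break
    return np.array(xs), float(np.mean(ys)), len(ys)
```

A low-variance integrand terminates with a small batch, while a tight tolerance on a noisy integrand exhausts `max_batch` — mirroring the abstract's point that the cost/precision trade-off, not a preset constant, should determine batch size.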
Pages: 32
Related papers
50 records total
  • [21] A Bayesian Active Learning Approach to Adaptive Motion Planning
    Choudhury, Sanjiban
    Srinivasa, Siddhartha S.
    ROBOTICS RESEARCH, 2020, 10 : 33 - 40
  • [22] Probabilistic Bayesian Deep Learning Approach for Online Forecasting of Fed-Batch Fermentation
    Wang, Tao
    You, Jiebing
    Gong, Xiugang
    Yang, Shanliang
    Wang, Lei
    Chang, Zheng
    ACS OMEGA, 2023, 8 (28): 25272 - 25278
  • [23] Probabilistic Active Learning in Datastreams
    Kottke, Daniel
    Krempl, Georg
    Spiliopoulou, Myra
    ADVANCES IN INTELLIGENT DATA ANALYSIS XIV, 2015, 9385 : 145 - 157
  • [24] Learning with Dynamic Architectures for Artificial Neural Networks - Adaptive Batch Size Approach
    Saeed, Reham
    Ghnemat, Rawan
    Benbrahim, Ghassen
    Elhassan, Ammar
    2019 2ND INTERNATIONAL CONFERENCE ON NEW TRENDS IN COMPUTING SCIENCES (ICTCS), 2019, : 302 - 305
  • [25] Embedding active learning in batch-to-batch optimization using reinforcement learning
    Byun, Ha-Eun
    Kim, Boeun
    Lee, Jay H.
    AUTOMATICA, 2023, 157
  • [26] Removing noise in on-line search using adaptive batch sizes
    Orr, GB
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 9: PROCEEDINGS OF THE 1996 CONFERENCE, 1997, 9 : 232 - 238
  • [27] Deep Learning of Adaptive Control Systems Based on a Logical-probabilistic Approach
    Demin, A. V.
    BULLETIN OF IRKUTSK STATE UNIVERSITY-SERIES MATHEMATICS, 2021, 38 : 65 - 83
  • [28] An Adaptive Approach for Probabilistic Wind Power Forecasting Based on Meta-Learning
    Meng, Zichao
    Guo, Ye
    Sun, Hongbin
    IEEE TRANSACTIONS ON SUSTAINABLE ENERGY, 2024, 15 (03) : 1814 - 1833
  • [29] Dynamic Batch Mode Active Learning
    Chakraborty, Shayok
    Balasubramanian, Vineeth
    Panchanathan, Sethuraman
    2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2011,
  • [30] Batch Decorrelation for Active Metric Learning
    Kumari, Priyadarshini
    Goru, Ritesh
    Chaudhuri, Siddhartha
    Chaudhuri, Subhasis
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2255 - 2261