Bandit-NAS: Bandit Sampling Method for Neural Architecture Search

Times Cited: 0
Authors
Lin, Yiqi [1 ]
Wang, Ru [1 ]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Keywords
Neural Architecture Search; Reinforcement Learning; Bandit Algorithm;
DOI
10.1109/IJCNN54540.2023.10191003
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing NAS (Neural Architecture Search) algorithms achieve low error rates on vision tasks such as image classification by training every child network with equal resources during the search. However, neither equal-resource training nor a fully converged score is necessary to obtain the relative performance of the child networks, so training all child networks with equal resources is computationally redundant. In this paper, we propose Bandit-NAS to automatically compute the required data slice and training time for each child network. (i) We first model the search for the best child-network training time under a given resource budget as an M-armed bandit problem. (ii) We then propose a reward-flexible bandit algorithm that works in conjunction with existing reinforcement-learning-based NAS algorithms to determine the update strategy. The proposed Bandit-NAS trains M child networks simultaneously under a given resource constraint (the training time of one epoch) and allocates the amount of training data according to the current accuracy of each child network, thereby minimizing the error rate of the child networks. Experiments on CIFAR-10 show that the proposed Bandit-NAS outperforms the baseline NAS algorithm, e.g., ENAS, with a lower error rate and a shorter search time.
Pages: 8
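
Note: the abstract describes allocating each epoch's training data across the M child networks according to their current accuracy, treated as the reward of an M-armed bandit. The following is a minimal illustrative sketch of such an allocation rule in Python; it is not the authors' implementation, and the softmax weighting, the function name bandit_allocate, and the example numbers are assumptions introduced here for clarity.

    # Minimal sketch (not the paper's code): split a fixed per-epoch budget of
    # mini-batches across M child networks, giving more data to children whose
    # current validation accuracy (used as the bandit reward) is higher.
    import numpy as np

    def bandit_allocate(accuracies, total_batches, temperature=1.0):
        """Return an integer batch count per child network summing to total_batches."""
        acc = np.asarray(accuracies, dtype=float)
        # Softmax over current accuracies acts as the arm-selection distribution.
        weights = np.exp(acc / temperature)
        probs = weights / weights.sum()
        # Convert probabilities into integer batch counts that sum to the budget.
        counts = np.floor(probs * total_batches).astype(int)
        counts[np.argmax(probs)] += total_batches - counts.sum()
        return counts

    # Example: 4 child networks, 100 mini-batches available in this epoch.
    print(bandit_allocate([0.62, 0.55, 0.71, 0.48], total_batches=100))

In this sketch the temperature parameter controls how aggressively data is concentrated on the currently best-performing child networks; the paper's reward-flexible bandit algorithm and its coupling with the RL-based controller are more involved than this single allocation step.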