Bandit-NAS: Bandit Sampling Method for Neural Architecture Search

Cited: 0
|
Authors
Lin, Yiqi [1 ]
Wang, Ru [1 ]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Keywords
Neural Architecture Search; Reinforcement Learning; Bandit Algorithm;
DOI
10.1109/IJCNN54540.2023.10191003
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing NAS (Neural Architecture Search) algorithms achieve low error rates on vision tasks such as image classification by training each child network with equal resources during the search. However, equal-resource training, or waiting for a fully converged score, is not necessary to obtain the relative performance of the child networks, so training all child networks with equal resources is computationally redundant. In this paper, we propose Bandit-NAS, which automatically computes the required data slicing and training time for each child network. (i) We first model the search for the best child-network training time under a given resource budget as an M-armed bandit problem. (ii) We then propose a reward-flexible bandit algorithm that works in conjunction with existing reinforcement-learning-based NAS algorithms to determine an update strategy. Bandit-NAS trains M child networks simultaneously under a given resource constraint (the training time for one epoch), allocating the amount of training data according to the current accuracy of each child network and thereby minimizing the error rate of the child networks. Experiments on CIFAR-10 show that Bandit-NAS outperforms the baseline NAS algorithm, e.g., ENAS, with a lower error rate and faster search time.
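As a rough illustration of the allocation idea described in the abstract, the sketch below treats each of the M child networks as an arm of a bandit and splits a fixed per-epoch batch budget among them with an epsilon-greedy rule driven by current accuracy. This is a minimal, hypothetical Python sketch: the function name, the epsilon-greedy rule, and the use of raw accuracy as the reward signal are illustrative assumptions, not the paper's actual reward-flexible bandit algorithm.

    import random

    def allocate_data_slices(current_acc, total_batches, epsilon=0.1):
        # Hypothetical sketch: each child network is one arm of an M-armed
        # bandit, and its current accuracy stands in for the reward.
        # An epsilon-greedy rule keeps some exploration so that slow
        # starters are not starved of training data.
        m = len(current_acc)
        counts = [0] * m
        for _ in range(total_batches):
            if random.random() < epsilon:
                arm = random.randrange(m)                          # explore
            else:
                arm = max(range(m), key=lambda i: current_acc[i])  # exploit
            counts[arm] += 1
        return counts

    # Toy usage: three child networks sharing a budget of 300 mini-batches
    # for one epoch of the search.
    if __name__ == "__main__":
        accs = [0.42, 0.55, 0.31]   # current accuracy of each child network
        print(allocate_data_slices(accs, total_batches=300))

In this toy allocation, the most accurate child receives most of the mini-batches while the epsilon fraction is spread uniformly, which mirrors the accuracy-proportional data slicing described above without claiming to reproduce the paper's update strategy.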
Pages: 8
Related papers
50 records in total
  • [21] Thompson Sampling Based Multi-Armed-Bandit Mechanism Using Neural Networks
    Manisha, Padala
    Gujar, Sujit
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 2111 - 2113
  • [22] Open-NAS: A customizable search space for Neural Architecture Search
    Pouy, Leo
    Khenfri, Fouad
    Leserf, Patrick
    Mhraida, Chokri
    Larouci, Cherif
    PROCEEDINGS OF 2023 8TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES, ICMLT 2023, 2023, : 102 - 107
  • [23] NAS-BNN: Neural Architecture Search for Binary Neural Networks
    Lin, Zhihao
    Wang, Yongtao
    Zhang, Jinhe
    Chu, Xiaojie
    Ling, Haibin
    PATTERN RECOGNITION, 2025, 159
  • [24] Contextual Bandit for Active Learning: Active Thompson Sampling
    Bouneffouf, Djallel
    Laroche, Romain
    Urvoy, Tanguy
    Feraud, Raphael
    Allesiardo, Robin
    NEURAL INFORMATION PROCESSING (ICONIP 2014), PT I, 2014, 8834 : 405 - 412
  • [25] A neural networks committee for the contextual bandit problem
    Allesiardo, Robin
    NEURAL INFORMATION PROCESSING (ICONIP 2014), PT I, 2014, 8834
  • [26] Optimistic Bayesian Sampling in Contextual-Bandit Problems
    May, Benedict C.
    Korda, Nathan
    Lee, Anthony
    Leslie, David S.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2012, 13 : 2069 - 2106
  • [27] Thompson Sampling for Non-Stationary Bandit Problems
    Qi, Han
    Guo, Fei
    Zhu, Li
    ENTROPY, 2025, 27 (01)
  • [28] Theory of Choice in Bandit, Information Sampling and Foraging Tasks
    Averbeck, Bruno B.
    PLOS COMPUTATIONAL BIOLOGY, 2015, 11 (03)
  • [29] Scalable Neural Contextual Bandit for Recommender Systems
    Zhu, Zheqing
    Van Roy, Benjamin
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3636 - 3646
  • [30] Meta-Learning with Neural Bandit Scheduler
    Qi, Yunzhe
    Ban, Yikun
    Wei, Tianxin
    Zou, Jiaru
    Yao, Huaxiu
    He, Jingrui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,