Bandit-NAS: Bandit Sampling Method for Neural Architecture Search

Cited by: 0
Authors
Lin, Yiqi [1 ]
Wang, Ru [1 ]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
Keywords
Neural Architecture Search; Reinforcement Learning; Bandit Algorithm
DOI
10.1109/IJCNN54540.2023.10191003
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing NAS (Neural Architecture Search) algorithms achieve low error rates on vision tasks such as image classification by training each child network with equal resources during the search. However, it is not necessary to train every child network with equal resources, or to wait for its fully converged score, to obtain the relative performance of the child networks; training all child networks with equal resources is therefore computationally redundant. In this paper, we propose Bandit-NAS to automatically compute the required data slice and training time for each child network. (i) We first model the search for the best child-network training time under a given resource as an M-armed bandit problem. (ii) We then propose a reward-flexible bandit algorithm, used in conjunction with existing reinforcement-learning-based NAS algorithms, to determine the update strategy. The proposed Bandit-NAS trains M child networks simultaneously under a given resource constraint (training time for one epoch) and allocates the amount of training data according to the current accuracy of the child networks, thus minimizing their error rate. Experiments on CIFAR-10 show that Bandit-NAS outperforms the baseline NAS algorithm, e.g., ENAS, with a lower error rate and a shorter search time.
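As a rough illustration of the allocation idea in the abstract, the sketch below splits a fixed per-epoch data budget across M child networks in proportion to their current accuracy, keeping a small uniform share for exploration. This is a minimal sketch under assumed names and rules (the proportional split, the budget size, and the training stub are illustrative), not the paper's actual reward-flexible bandit algorithm.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): treat each of the M
# child networks as one arm of an M-armed bandit and split a fixed
# per-epoch data budget among them according to their current accuracy.
# The allocation rule, budget size, and training stub are assumptions.

M = 4                      # number of child networks trained simultaneously
BUDGET = 50_000            # training samples available per epoch (e.g. CIFAR-10)
EPOCHS = 10
rng = np.random.default_rng(0)

acc = np.full(M, 1.0 / M)  # running accuracy estimate per child (the bandit reward)

def train_child(i, n_samples):
    """Placeholder: train child i on a slice of n_samples and return its
    validation accuracy. A real search would train the sampled architecture."""
    return min(1.0, acc[i] + 0.05 * n_samples / BUDGET + 0.01 * rng.random())

for epoch in range(EPOCHS):
    # Allocate the shared budget in proportion to current accuracy, so
    # better-performing children get more data (exploitation) while every
    # child keeps a small minimum share (exploration).
    share = 0.1 / M + 0.9 * acc / acc.sum()
    samples = (share * BUDGET).astype(int)
    for i in range(M):
        acc[i] = train_child(i, samples[i])
    print(f"epoch {epoch}: samples={samples.tolist()}, acc={np.round(acc, 3).tolist()}")
```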
Pages: 8
Related Papers
50 records in total
  • [1] Bandit-NAS: Bandit sampling and training method for Neural Architecture Search
    Lin, Yiqi
    Endo, Yuki
    Lee, Jinho
    Kamijo, Shunsuke
    NEUROCOMPUTING, 2024, 597
  • [2] Anti-Bandit for Neural Architecture Search
    Wang, Runqi
    Yang, Linlin
    Chen, Hanlin
    Wang, Wei
    Doermann, David
    Zhang, Baochang
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023, 131 (10) : 2682 - 2698
  • [3] Bandit neural architecture search based on performance evaluation for operation selection
    Zhang, Jian
    Gong, Xuan
    Liu, YuXiao
    Wang, Wei
    Wang, Lei
    Zhang, BaoChang
    SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2023, 66 (02) : 481 - 488
  • [4] Neural Architecture Search via Combinatorial Multi-Armed Bandit
    Huang, Hanxun
    Ma, Xingjun
    Erfani, Sarah M.
    Bailey, James
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [5] Training Heterogeneous Graph Neural Networks using Bandit Sampling
    Wang, Ta-Yang
    Kannan, Rajgopal
    Prasanna, Viktor
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 4345 - 4349
  • [6] Coordinate Descent with Bandit Sampling
    Salehi, Farnood
    Thiran, Patrick
    Celis, L. Elisa
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [7] Thompson Sampling for the Multinomial Logit Bandit
    Agrawal, Shipra
    Avadhanula, Vashist
    Goyal, Vineet
    Zeevi, Assaf
    MATHEMATICS OF OPERATIONS RESEARCH, 2025