Device Placement Optimization with Reinforcement Learning

Cited by: 0
Authors
Mirhoseini, Azalia [1 ]
Pham, Hieu [1 ]
Le, Quoc V. [1 ]
Steiner, Benoit [1 ]
Larsen, Rasmus [1 ]
Zhou, Yuefeng [1 ]
Kumar, Naveen [2 ]
Norouzi, Mohammad [1 ]
Bengio, Samy [1 ]
Dean, Jeff [1 ]
Affiliations
[1] Google Brain, Mountain View, CA 94043 USA
[2] Google, Mountain View, CA USA
Keywords
ALGORITHMS
DOI
Not available
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The past few years have witnessed a growth in size and computational requirements for training and inference with neural networks. Currently, a common approach to address these requirements is to use a heterogeneous distributed environment with a mixture of hardware devices such as CPUs and GPUs. Importantly, the decision of placing parts of the neural models on devices is often made by human experts based on simple heuristics and intuitions. In this paper, we propose a method which learns to optimize device placement for TensorFlow computational graphs. Key to our method is the use of a sequence-to-sequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices. The execution time of the predicted placements is then used as the reward signal to optimize the parameters of the sequence-to-sequence model. Our main result is that on Inception-V3 for ImageNet classification, and on RNN LSTM for language modeling and neural machine translation, our model finds non-trivial device placements that outperform hand-crafted heuristics and traditional algorithmic methods.
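The abstract's core loop can be illustrated with a minimal sketch: a stochastic policy samples a device for each operation, the (here simulated) execution time of that placement serves as a negative reward, and the policy parameters are updated with REINFORCE. This is an assumption-laden simplification for illustration only: it uses an independent per-operation softmax policy and a toy cost model, not the paper's sequence-to-sequence model over TensorFlow graphs or measured runtimes.

import numpy as np

rng = np.random.default_rng(0)

NUM_OPS = 8        # operations (or operation groups) in a toy graph
NUM_DEVICES = 2    # available devices, e.g. one CPU and one GPU

# Hypothetical per-op compute cost on each device plus a fixed cross-device
# communication penalty; a real system would measure actual runtimes.
op_cost = rng.uniform(1.0, 4.0, size=(NUM_OPS, NUM_DEVICES))
COMM_COST = 2.0

def execution_time(placement):
    """Simulated runtime: per-device load imbalance plus communication
    between consecutive ops placed on different devices."""
    load = np.zeros(NUM_DEVICES)
    for op, dev in enumerate(placement):
        load[dev] += op_cost[op, dev]
    comm = COMM_COST * np.sum(placement[1:] != placement[:-1])
    return load.max() + comm

# Policy: independent softmax over devices for each op (a simplification of
# the paper's sequence-to-sequence policy).
logits = np.zeros((NUM_OPS, NUM_DEVICES))
lr, baseline = 0.1, None

for step in range(500):
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    placement = np.array([rng.choice(NUM_DEVICES, p=p) for p in probs])

    reward = -execution_time(placement)            # faster placement = higher reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline

    # REINFORCE update: advantage * gradient of log pi(placement) w.r.t. logits
    grad = -probs
    grad[np.arange(NUM_OPS), placement] += 1.0
    logits += lr * advantage * grad

best = probs.argmax(axis=1)
print("learned placement:", best, "simulated runtime:", execution_time(best))

Under these assumptions the policy learns to balance load across the two toy devices while avoiding unnecessary cross-device transfers; the paper applies the same reward-driven idea to real TensorFlow graphs with measured execution times.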
Pages: 10
Related Papers
50 records in total
  • [1] Zhang, Junpeng; Deng, Fang; Yang, Xudong. FPGA Placement Optimization with Deep Reinforcement Learning. 2021 2nd International Conference on Computer Engineering and Intelligent Control (ICCEIC 2021), 2021: 73-76.
  • [2] Zheng, Jinkai; Mu, Phil K.; Man, Ziqian; Luan, Tom H.; Cai, Lin X.; Shan, Hangguan. Device Placement for Autonomous Vehicles using Reinforcement Learning. IEEE Congress on Cybermatics / 2021 IEEE International Conferences on Internet of Things (iThings) / IEEE Green Computing and Communications (GreenCom) / IEEE Cyber, Physical and Social Computing (CPSCom) / IEEE Smart Data (SmartData), 2021: 190-196.
  • [3] Lan, Hao; Chen, Li; Li, Baochun. Accelerated Device Placement Optimization with Contrastive Learning. 50th International Conference on Parallel Processing, 2021.
  • [4] Murray, Kevin E.; Betz, Vaughn. Adaptive FPGA Placement Optimization via Reinforcement Learning. 2019 ACM/IEEE 1st Workshop on Machine Learning for CAD (MLCAD), 2019.
  • [5] Ding, Zixiang; Chen, Yaran; Li, Nannan; Zhao, Dongbin. Device Placement Optimization for Deep Neural Networks via One-shot Model and Reinforcement Learning. 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020: 1478-1484.
  • [6] Solozabal, Ruben; Ceberio, Josu; Sanchoyerto, Aitor; Zabala, Luis; Blanco, Bego; Liberal, Fidel. Virtual Network Function Placement Optimization With Deep Reinforcement Learning. IEEE Journal on Selected Areas in Communications, 2020, 38(2): 292-303.
  • [7] Agnesina, Anthony; Chang, Kyungwook; Lim, Sung Kyu. Parameter Optimization of VLSI Placement Through Deep Reinforcement Learning. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023, 42(4): 1295-1308.
  • [8] Qiu, Jin; Lyu, Jiangbin; Fu, Liqun. Placement Optimization of Aerial Base Stations with Deep Reinforcement Learning. ICC 2020 - 2020 IEEE International Conference on Communications (ICC), 2020.
  • [9] Agnesina, Anthony; Chang, Kyungwook; Lim, Sung Kyu. VLSI Placement Parameter Optimization using Deep Reinforcement Learning. 2020 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2020.
  • [10] Aboelela, Omnia A.; Sadek, Rowayda A. An Efficient Reinforcement Learning Based Approach for SDN Controller Placement Optimization. 2024 41st National Radio Science Conference (NRSC 2024), 2024: 126-135.