Device Placement Optimization with Reinforcement Learning

Cited by: 0
Authors
Mirhoseini, Azalia [1 ]
Pham, Hieu [1 ]
Le, Quoc V. [1 ]
Steiner, Benoit [1 ]
Larsen, Rasmus [1 ]
Zhou, Yuefeng [1 ]
Kumar, Naveen [2 ]
Norouzi, Mohammad [1 ]
Bengio, Samy [1 ]
Dean, Jeff [1 ]
Affiliations
[1] Google Brain, Mountain View, CA 94043 USA
[2] Google, Mountain View, CA USA
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70 | 2017 / Vol. 70
Keywords
ALGORITHMS;
DOI
Not available
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The past few years have witnessed a growth in size and computational requirements for training and inference with neural networks. Currently, a common approach to address these requirements is to use a heterogeneous distributed environment with a mixture of hardware devices such as CPUs and GPUs. Importantly, the decision of placing parts of the neural models on devices is often made by human experts based on simple heuristics and intuitions. In this paper, we propose a method which learns to optimize device placement for TensorFlow computational graphs. Key to our method is the use of a sequence-to-sequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices. The execution time of the predicted placements is then used as the reward signal to optimize the parameters of the sequence-to-sequence model. Our main result is that on Inception-V3 for ImageNet classification, and on RNN LSTM for language modeling and neural machine translation, our model finds non-trivial device placements that outperform hand-crafted heuristics and traditional algorithmic methods.
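The core loop the abstract describes is policy-gradient training where sampled placements are scored by measured execution time. The sketch below is an illustrative toy, not the paper's method: it replaces the seq2seq policy with independent per-operation device logits, and replaces real TensorFlow runtime measurement with a toy makespan cost. All names (`sample_placement`, `runtime`, `train`, the constants) are hypothetical; only the REINFORCE-with-baseline structure mirrors the abstract.

```python
import math
import random

N_OPS, N_DEVICES = 8, 2   # toy graph: 8 ops, 2 devices
LR = 0.5                  # learning rate for the logit updates

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_placement(logits, rng):
    """Sample a device for each op from its softmax distribution."""
    placement = []
    for op_logits in logits:
        probs = softmax(op_logits)
        r, acc = rng.random(), 0.0
        for d, p in enumerate(probs):
            acc += p
            if r <= acc:
                placement.append(d)
                break
        else:
            placement.append(len(probs) - 1)
    return placement

def runtime(placement):
    """Toy cost standing in for measured execution time: makespan of
    equally sized ops, so balanced placements finish fastest."""
    loads = [0] * N_DEVICES
    for d in placement:
        loads[d] += 1
    return max(loads)

def train(steps=2000, seed=0):
    rng = random.Random(seed)
    logits = [[0.0] * N_DEVICES for _ in range(N_OPS)]
    baseline = None
    for _ in range(steps):
        placement = sample_placement(logits, rng)
        reward = -runtime(placement)  # faster placement => higher reward
        baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
        advantage = reward - baseline
        # REINFORCE: raise the log-prob of each sampled device choice
        # in proportion to the advantage (grad of log-softmax).
        for op, d in enumerate(placement):
            probs = softmax(logits[op])
            for k in range(N_DEVICES):
                grad = (1.0 if k == d else 0.0) - probs[k]
                logits[op][k] += LR * advantage * grad
    return logits

logits = train()
best = [max(range(N_DEVICES), key=lambda row: op[row]) for op in logits]
```

In the paper the policy is a sequence-to-sequence network over the whole op sequence and the reward is wall-clock execution time of the placed TensorFlow graph; the moving-average baseline here plays the same variance-reduction role as the baseline term in standard policy-gradient training.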
Pages: 10
Related Papers
50 total
  • [31] Learning Global Optimization by Deep Reinforcement Learning
    da Silva Filho, Moesio Wenceslau
    Barbosa, Gabriel A.
    Miranda, Pericles B. C.
    INTELLIGENT SYSTEMS, PT II, 2022, 13654 : 417 - 433
  • [32] Antenna Beamwidth Optimization in Directional Device-to-Device Communication Using Multi-Agent Deep Reinforcement Learning
    Bahadori, Niloofar
    Nabil, Mahmoud
    Homaifar, Abdollah
    IEEE ACCESS, 2021, 9 : 110601 - 110613
  • [33] Wafer batch device scheduling method combining reverse reinforcement learning and reinforcement learning
    Wang Z.
    Zhang P.
    Zhang J.
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2023, 29 (11): : 3738 - 3749
  • [34] On the Robustness of Controlled Deep Reinforcement Learning for Slice Placement
    Jose Jurandir Alves Esteves
    Amina Boubendir
    Fabrice Guillemin
    Pierre Sens
    Journal of Network and Systems Management, 2022, 30
  • [35] An Edge Server Placement Method Based on Reinforcement Learning
    Luo, Fei
    Zheng, Shuai
    Ding, Weichao
    Fuentes, Joel
    Li, Yong
    ENTROPY, 2022, 24 (03)
  • [36] Guiding FPGA Detailed Placement via Reinforcement Learning
    Esmaeili, P.
    Martin, T.
    Areibi, S.
    Grewal, G.
    PROCEEDINGS OF THE 2022 IFIP/IEEE 30TH INTERNATIONAL CONFERENCE ON VERY LARGE SCALE INTEGRATION (VLSI-SOC), 2022,
  • [37] Respect the Difference: Reinforcement Learning for Heterogeneous FPGA Placement
    Mahmoudi, Fatemehsadat
    Elgammal, Mohamed A.
    Shahrouz, Soheil Gholami
    Murray, Kevin E.
    Betz, Vaughn
    2023 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY, ICFPT, 2023, : 152 - 160
  • [38] Controlled Deep Reinforcement Learning for Optimized Slice Placement
    Esteves, Jose Jurandir Alves
    Boubendir, Amina
    Guillemin, Fabrice
    Sens, Pierre
    2021 IEEE INTERNATIONAL MEDITERRANEAN CONFERENCE ON COMMUNICATIONS AND NETWORKING (IEEE MEDITCOM 2021), 2021, : 20 - 22
  • [39] On the Robustness of Controlled Deep Reinforcement Learning for Slice Placement
    Esteves, Jose Jurandir Alves
    Boubendir, Amina
    Guillemin, Fabrice
    Sens, Pierre
    JOURNAL OF NETWORK AND SYSTEMS MANAGEMENT, 2022, 30 (03)
  • [40] A Reinforcement Learning Based Placement Strategy in Datacenter Networks
    Yang, Weihong
    Qin, Yang
    Yang, ZhaoZheng
    QUALITY, RELIABILITY, SECURITY AND ROBUSTNESS IN HETEROGENEOUS SYSTEMS, 2020, 300 : 87 - 101