An efficient leakage power optimization framework based on reinforcement learning with graph neural network

Cited: 0
|
Authors
Cao, Peng [1 ]
Dong, Yuhan [1 ]
Zhang, Zhanhua [1 ]
Ding, Wenjie [1 ]
Wang, Jiahao [1 ]
Affiliations
[1] Southeast Univ, Natl ASIC Syst Engn Res Ctr, Nanjing 210000, Peoples R China
Source
SCIENTIFIC REPORTS | 2024, Vol. 14, No. 1
Funding
National Natural Science Foundation of China;
Keywords
Threshold voltage; Leakage power; Reinforcement learning; Graph neural network;
DOI
10.1038/s41598-024-76859-z
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Threshold voltage ($V_{th}$) assignment is convenient for leakage optimization because leakage power depends exponentially on $V_{th}$ and logic cells can be swapped without routing effort. However, it poses a great challenge in large-scale circuit design as an NP-hard problem. Machine learning-based approaches have been proposed to solve this problem, aiming to achieve a good tradeoff between leakage power reduction and runtime speedup without newly induced timing violations. In this paper, a leakage power optimization framework based on reinforcement learning (RL) with a graph neural network (GNN) is proposed for the first time, formulating $V_{th}$ assignment as an RL process in which the GNN learns the timing and physical characteristics of each circuit instance. Multiple instances are selected in a non-overlapped manner for each RL action iteration to speed up convergence and decouple timing interdependence along circuit paths, and the corresponding reward is carefully defined to trade off leakage reduction against potential timing violations. The proposed framework was validated on the Opencores and IWLS 2005 benchmark circuits with TSMC 28 nm technology. Experimental results demonstrate that our work outperforms prior non-analytical and GNN-based methods with an additional 5% to 17% leakage power reduction, which is highly consistent with the commercial tool. When the trained RL-based framework is transferred to unseen circuits, it achieves roughly the same leakage optimization results as on seen circuits and speeds up the runtime by 5.7× to 8.5× compared with the commercial tool.
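As a rough illustration of the formulation described in the abstract, the sketch below casts a single $V_{th}$-swap action as choosing a non-overlapping subset of instances and scoring it with a reward that trades leakage reduction against newly created negative slack. The class and function names (`Instance`, `reward`), the leakage/delay numbers, and the penalty weight are illustrative assumptions, not values from the paper; the actual framework learns the selection policy from GNN embeddings of timing and physical features rather than the greedy toy choice shown here.

```python
# Minimal sketch (not the authors' code) of Vth assignment cast as an RL-style action:
# swap a non-overlapping set of instances to one Vth level and score the move with a
# reward balancing leakage saved against induced timing violations. All numbers are
# illustrative assumptions.
import random
from dataclasses import dataclass

# Hypothetical per-Vth cell characteristics: lower Vth -> faster but leakier.
VTH_LEVELS = ["LVT", "SVT", "HVT"]
LEAKAGE = {"LVT": 10.0, "SVT": 3.0, "HVT": 1.0}   # arbitrary leakage units
DELAY   = {"LVT": 1.0,  "SVT": 1.2, "HVT": 1.5}   # arbitrary delay units

@dataclass
class Instance:
    vth: str
    slack: float  # timing slack at this instance (assumed to come from STA)

def reward(insts, chosen, new_vth):
    """Reward for swapping the chosen (non-overlapping) instances to new_vth:
    leakage saved minus a penalty for any slack that would go negative."""
    leak_saved, violation = 0.0, 0.0
    for i in chosen:
        leak_saved += LEAKAGE[insts[i].vth] - LEAKAGE[new_vth]
        new_slack = insts[i].slack - (DELAY[new_vth] - DELAY[insts[i].vth])
        if new_slack < 0:
            violation += -new_slack
    return leak_saved - 10.0 * violation  # penalty weight is an assumed hyperparameter

# Toy iteration: pick a non-overlapping subset, then the target Vth with the best reward.
insts = [Instance("LVT", slack=random.uniform(0.0, 1.0)) for _ in range(8)]
chosen = random.sample(range(len(insts)), k=3)
best_vth = max(VTH_LEVELS, key=lambda v: reward(insts, chosen, v))
print("swap", chosen, "to", best_vth, "reward", reward(insts, chosen, best_vth))
```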
Pages: 11