Deep Reinforcement Learning-Based Optimal Parameter Design of Power Converters

Cited by: 2
|
Authors
Bui, Van-Hai [1 ,4 ]
Chang, Fangyuan [1 ]
Su, Wencong [1 ]
Wang, Mengqi [1 ]
Murphey, Yi Lu [1 ]
Da Silva, Felipe Leno [2 ]
Huang, Can [2 ]
Xue, Lingxiao [3 ]
Glatt, Ruben [2 ]
Affiliations
[1] Univ Michigan Dearborn, Dept Elect & Comp Engn, Coll Engn & Comp Sci, Dearborn, MI 48128 USA
[2] Lawrence Livermore Natl Lab LLNL, Livermore, CA 94550 USA
[3] Oak Ridge Natl Lab ORNL, Oak Ridge, TN 37830 USA
[4] State Univ New York SUNY Maritime Coll, Dept Elect Engn, Throggs Neck, NY 10465 USA
Keywords
deep reinforcement learning; deep neural networks; optimal parameters design; optimization; power converters; OPTIMIZATION; FREQUENCY; PFC;
DOI
10.1109/ICNC57223.2023.10074355
CLC classification number
TP3 [Computing Technology, Computer Technology];
Discipline classification code
0812
Abstract
The optimal design of power converters is often time-consuming, requiring a huge number of simulations to determine the optimal parameters. To shorten the design cycle, this paper proposes a proximal policy optimization (PPO)-based model to optimize the design parameters of Buck and Boost converters. In each training step, the learning agent takes an action that adjusts the values of the design parameters and interacts with a dynamic Simulink model. The simulation provides feedback on power efficiency and guides the learning agent in optimizing the parameter design. Unlike deep Q-learning and standard actor-critic algorithms, PPO uses a clipped objective function that prevents the new policy from deviating too far from the old policy, which accelerates and stabilizes the learning process. Finally, to demonstrate the effectiveness of the proposed method, the performance of different optimization algorithms is compared on two popular power converter topologies.
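The clipped objective mentioned in the abstract can be illustrated with a minimal sketch of the standard PPO clip term (this is a generic illustration of the technique, not code from the paper; the function name and the default clip range epsilon = 0.2 are assumptions):

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Per-sample clipped surrogate objective used by PPO.

    ratio:     probability ratio pi_new(a|s) / pi_old(a|s)
    advantage: estimated advantage A(s, a) of the action
    epsilon:   clip range; the ratio's effect is bounded to [1-eps, 1+eps]
    """
    unclipped = ratio * advantage
    # Clamp the ratio before multiplying by the advantage.
    clipped = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon) * advantage
    # Taking the minimum yields a pessimistic lower bound, removing any
    # incentive for an update that pushes the ratio outside the clip range.
    return min(unclipped, clipped)
```

In training, the agent would maximize the average of this quantity over a batch of sampled state-action pairs; the clipping is what keeps each policy update close to the previous policy.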
Citation
Pages: 25-29
Page count: 5
Related Papers
50 records in total
  • [41] Reinforcement learning-based optimal hull form design with variations in fore and aft parts
    Oh, Se-Jin
    Oh, Min-Jae
    Son, Eun-Young
    JOURNAL OF COMPUTATIONAL DESIGN AND ENGINEERING, 2024, 11 (06) : 1 - 19
  • [42] Deep reinforcement learning-based robust missile guidance
    Ahn, Jeongsu
    Shin, Jongho
    Kim, Hyeong-Geun
    2022 22ND INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2022), 2022, : 927 - 930
  • [43] A Deep Reinforcement Learning-Based Approach in Porker Game
    Kong, Yan
    Rui, Yefeng
    Hsia, Chih-Hsien
    JOURNAL OF COMPUTERS (TAIWAN), 2023, 34 (02) : 41 - 51
  • [44] A Deep Reinforcement Learning-Based Framework for Content Caching
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2018 52ND ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2018,
  • [45] Optimal Design of Planar Microwave Microfluidic Sensors Based on Deep Reinforcement Learning
    Wang, Bin-Xiao
    Zhao, Wen-Sheng
    Wang, Da-Wei
    Wang, Junchao
    Li, Wenjun
    Liu, Jun
    IEEE SENSORS JOURNAL, 2021, 21 (24) : 27441 - 27449
  • [46] Deep Reinforcement Learning-based Traffic Signal Control
    Ruan, Junyun
    Tang, Jinzhuo
    Gao, Ge
    Shi, Tianyu
    Khamis, Alaa
    2023 IEEE INTERNATIONAL CONFERENCE ON SMART MOBILITY, SM, 2023, : 21 - 26
  • [47] Deep reinforcement learning-based antilock braking algorithm
    Mantripragada, V. Krishna Teja
    Kumar, R. Krishna
    VEHICLE SYSTEM DYNAMICS, 2023, 61 (05) : 1410 - 1431
  • [48] Deep Reinforcement Learning-Based Defense Strategy Selection
    Charpentier, Axel
    Boulahia-Cuppens, Nora
    Cuppens, Frederic
    Yaich, Reda
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY AND SECURITY, ARES 2022, 2022,
  • [49] Computing on Wheels: A Deep Reinforcement Learning-Based Approach
    Kazmi, S. M. Ahsan
    Tai Manh Ho
    Tuong Tri Nguyen
    Fahim, Muhammad
    Khan, Adil
    Piran, Md Jalil
    Baye, Gaspard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (11) : 22535 - 22548
  • [50] Deep Reinforcement Learning-based Power Distribution Network Structure Design Optimization Method for High Bandwidth Memory Interposer
    Lee, Seonghi
    Kim, Hyunwoong
    Song, Kyunghwan
    Kim, Jongwook
    Park, Dongryul
    Ahn, Jangyong
    Kim, Keunwoo
    Ahn, Seungyoung
    IEEE 30TH CONFERENCE ON ELECTRICAL PERFORMANCE OF ELECTRONIC PACKAGING AND SYSTEMS (EPEPS 2021), 2021,