Suspension Control Strategies Using Switched Soft Actor-Critic Models for Real Roads

Cited by: 17
Authors
Yong, Hwanmoo [1 ]
Seo, Joohwan [2 ]
Kim, Jaeyoung [3 ]
Kim, Myounghoe [1 ]
Choi, Jongeun [1 ]
Affiliations
[1] Yonsei Univ, Sch Mech Engn, Seoul 03722, South Korea
[2] Univ Calif Berkeley, Dept Mech Engn, Berkeley, CA 94720 USA
[3] Hyundai Motor Grp, R&D Div, Hwaseong Si 18280, South Korea
Funding
National Research Foundation of Singapore
Keywords
Deep reinforcement learning (DRL); full-car suspension system; hardware implementation; semiactive suspension; soft actor-critic (SAC); DESIGN;
DOI
10.1109/TIE.2022.3153805
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
In this article, we propose learning and control strategies for a semiactive suspension system in a full car using soft actor-critic (SAC) models on real roads, where road profiles with widely varying disturbance power exist (e.g., speed bumps and general roads). Accordingly, we propose a technique that enables deep reinforcement learning to cover different domains with largely different reward functions. This concept was first realized in a simulation environment. Our proposed switching learning system continuously identifies the two different road disturbance profiles in real time so that the appropriately designed SAC model can be learned and applied accordingly. The results of the proposed switching SAC algorithm were compared against those of advanced and conventional benchmark suspension systems. The proposed algorithm showed smaller root-mean-square values of the z-directional acceleration and pitch at the center of the body mass. Finally, we also present our SAC training system, successfully implemented in a real car on real roads. The trained SAC model outperforms conventional controllers, reducing the z-directional acceleration and pitch as in the simulation results, both of which are closely related to riding comfort and vehicle maneuverability.
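As a rough illustration of the switching idea described in the abstract, the Python sketch below selects between two pretrained SAC policies based on an estimated disturbance power computed from recent vertical-acceleration samples. The class names, the placeholder policy, the mean-square power estimate, and the threshold are assumptions made for illustration only; they do not reproduce the authors' models or their road-profile identifier.

# Minimal sketch of a switched-policy controller, assuming a simple
# disturbance-power estimate decides which SAC policy acts. Illustrative only.
import numpy as np

class SACPolicy:
    """Stand-in for a trained soft actor-critic policy (placeholder, not a real SAC actor)."""
    def __init__(self, name: str, gain: float):
        self.name = name
        self.gain = gain

    def act(self, state: np.ndarray) -> np.ndarray:
        # A real SAC actor would sample from a learned Gaussian policy;
        # here we return a bounded, state-dependent placeholder action.
        return np.tanh(self.gain * state[:4])  # four semiactive damper commands

class RoadProfileSwitcher:
    """Switches between two SAC policies based on estimated disturbance power."""
    def __init__(self, bump_policy, road_policy, power_threshold=1.0, window=50):
        self.bump_policy = bump_policy
        self.road_policy = road_policy
        self.power_threshold = power_threshold  # assumed tuning parameter
        self.window = window
        self.accel_history = []

    def update(self, body_accel_z: float) -> None:
        # Keep a sliding window of vertical body-acceleration samples.
        self.accel_history.append(body_accel_z)
        if len(self.accel_history) > self.window:
            self.accel_history.pop(0)

    def disturbance_power(self) -> float:
        # Mean-square acceleration over the window as a crude power estimate.
        if not self.accel_history:
            return 0.0
        return float(np.mean(np.square(self.accel_history)))

    def act(self, state: np.ndarray) -> np.ndarray:
        # High estimated power -> speed-bump policy; otherwise general-road policy.
        if self.disturbance_power() > self.power_threshold:
            return self.bump_policy.act(state)
        return self.road_policy.act(state)

if __name__ == "__main__":
    switcher = RoadProfileSwitcher(SACPolicy("bump", 2.0), SACPolicy("road", 0.5))
    rng = np.random.default_rng(0)
    for _ in range(100):
        state = rng.normal(size=8)     # e.g., body and wheel states
        switcher.update(state[0])      # vertical acceleration sample
        action = switcher.act(state)   # damper commands from the selected policy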
Pages: 824-832
Number of pages: 9