Online Deep Learning Control of an Autonomous Surface Vehicle Using Learned Dynamics

Cited by: 3
Authors
Peng, Zhouhua [1 ,2 ]
Xia, Fengbei [1 ,2 ]
Liu, Lu [1 ,2 ]
Wang, Dan [1 ,2 ]
Li, Tieshan [3 ]
Peng, Ming [4 ]
Affiliations
[1] Dalian Maritime Univ, Sch Marine Elect Engn, Dalian 116026, Peoples R China
[2] Dalian Key Lab Swarm Control & Elect Technol Intel, Dalian 116026, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Peoples R China
[4] Jiangsu Automat Res Inst, Lianyungang 222061, Jiangsu, Peoples R China
Source
Funding
National Key Research and Development Program of China;
Keywords
Deep learning; Trajectory tracking; Vehicle dynamics; Artificial neural networks; Task analysis; Data models; Predictive models; Deep learning control; deep neural network; extended state observer; autonomous surface vehicle; anti-disturbance control; NETWORKS; TRACKING;
DOI
10.1109/TIV.2023.3333437
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Real-time model learning is a challenging task for autonomous surface vehicles (ASVs) sailing in a variable sea environment. Deep learning based on deep neural networks (DNNs) benefits from a high representational capability; however, it is difficult to achieve stable learning control performance due to modeling errors or model bias. On the other hand, an extended state observer (ESO) can rapidly reconstruct unknown disturbances. In this paper, an online deep learning control method is presented for an ASV to achieve trajectory tracking. Specifically, a general DNN is first constructed to learn the unknown ASV dynamics online, using the collected data sample by sample at each time step to improve scalability. Then, an ESO is designed to estimate the modeling errors of the DNN in order to further improve the model learning accuracy. Finally, a stable online deep learning trajectory tracking control law is designed based on the ASV dynamics learned by the DNN and the modeling errors reconstructed by the ESO. Using cascade system theory, it is proven that the closed-loop trajectory tracking control system is input-to-state stable and all signals are uniformly ultimately bounded. Simulation results of circular trajectory tracking show that the proposed method improves the transient tracking performance compared with DNN-based and ESO-based control methods. Moreover, an "8-type" trajectory tracking simulation is further provided to demonstrate the generalization capability of the proposed method to new trajectories and new environments.
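The control structure summarized above (online DNN learning of the dynamics, an ESO for the residual modeling error, and a tracking law that uses both) can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a simplified single-axis surge model, a tiny one-hidden-layer network trained by per-sample SGD in place of the paper's DNN, a second-order linear ESO, and a feedback-linearizing tracking law. The names (OnlineNet, eso_step), gains, network size, and the drag/disturbance models are all illustrative assumptions.

```python
"""Minimal sketch of online-learned dynamics + ESO + tracking control (assumptions throughout)."""
import numpy as np

rng = np.random.default_rng(0)

class OnlineNet:
    """One-hidden-layer network updated one sample at a time with SGD (assumed stand-in for the DNN)."""
    def __init__(self, n_in, n_hidden=16, lr=1e-2):
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = 0.1 * rng.standard_normal(n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ self.h + self.b2

    def update(self, x, target):
        # one SGD step on squared error, using only the newest sample
        e = self.forward(x) - target
        dh = e * self.W2 * (1.0 - self.h ** 2)
        self.W2 -= self.lr * e * self.h
        self.b2 -= self.lr * e
        self.W1 -= self.lr * np.outer(dh, x)
        self.b1 -= self.lr * dh

def eso_step(z1, z2, v, u, f_hat, dt, l1=20.0, l2=100.0):
    """Linear ESO: z1 tracks the surge speed v, z2 tracks the residual f(v) + d - f_hat(v)."""
    err = v - z1
    z1 += dt * (f_hat + z2 + u + l1 * err)
    z2 += dt * (l2 * err)
    return z1, z2

def true_f(v):
    # unknown hydrodynamic damping, used only to simulate the plant (illustrative)
    return -0.8 * v - 0.3 * abs(v) * v

dt, T = 0.01, 20.0
v, z1, z2 = 0.0, 0.0, 0.0
net = OnlineNet(n_in=1)
k = 2.0  # tracking gain (assumption)

for step in range(int(T / dt)):
    t = step * dt
    v_ref = 1.0 + 0.5 * np.sin(0.3 * t)
    v_ref_dot = 0.15 * np.cos(0.3 * t)

    x = np.array([v])
    f_hat = net.forward(x)
    # control law: cancel the learned dynamics and the ESO-estimated residual
    u = v_ref_dot - k * (v - v_ref) - f_hat - z2

    d = 0.2 * np.sin(0.5 * t)           # unknown disturbance (illustrative)
    v_dot = true_f(v) + u + d           # plant: v_dot = f(v) + u + d
    v += dt * v_dot

    z1, z2 = eso_step(z1, z2, v, u, f_hat, dt)
    # online learning target f(v) + d = v_dot - u; in practice v_dot would be
    # obtained by filtered differentiation of measurements, not read off the plant
    net.update(x, v_dot - u)

print(f"final tracking error: {abs(v - v_ref):.4f}")
```

In the full 3-DOF ASV case the same structure would be applied component-wise to the vehicle's velocity dynamics; the scalar surge example is only meant to show how the online network, the ESO estimate, and the tracking law fit together.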
Pages: 3283-3292
Page count: 10
Related papers
50 records in total
  • [31] Surface path tracking method of autonomous surface underwater vehicle based on deep reinforcement learning
    Song, Dalei
    Gan, Wenhao
    Yao, Peng
    Zang, Wenchuan
    Qu, Xiuqing
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (08): 6225-6245
  • [32] Online System Identification of the dynamics of an Autonomous Underwater Vehicle
    Hong, Eng You
    Meng, Teo Kwong
    Chitre, Mandar
    2013 IEEE INTERNATIONAL UNDERWATER TECHNOLOGY SYMPOSIUM (UT), 2013,
  • [33] Controlling an Autonomous Vehicle with Deep Reinforcement Learning
    Folkers, Andreas
    Rick, Matthias
    Bueskens, Christof
    2019 30TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV19), 2019, : 2025 - 2031
  • [34] Autonomous Vehicle Control Using a Deep Neural Network and Jetson Nano
    Febbo, Rocco
    Flood, Brendan
    Halloy, Julian
    Lau, Patrick
    Wong, Kwai
    Ayala, Alan
    PRACTICE AND EXPERIENCE IN ADVANCED RESEARCH COMPUTING 2020, PEARC 2020, 2020, : 333 - 338
  • [35] Dynamics-Aligned Transfer Reinforcement Learning For Autonomous Underwater Vehicle Control
    Cheng, Kai
    Lu, Wenjie
    Xiong, Hao
    Liu, Honghai
    2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022, : 1040 - 1045
  • [36] Hierarchical speed control for autonomous electric vehicle through deep reinforcement learning and robust control
    Xu, Guangfei
    He, Xiangkun
    Chen, Meizhou
    Miao, Hequan
    Pang, Huanxiao
    Wu, Jian
    Diao, Peisong
    Wang, Wenjun
    IET CONTROL THEORY AND APPLICATIONS, 2022, 16 (01): 112-124
  • [37] Autonomous Navigation and Control of a Quadrotor Using Deep Reinforcement Learning
    Mokhtar, Mohamed
    El-Badawy, Ayman
    2023 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS, ICUAS, 2023, : 1045 - 1052
  • [38] Steering control in autonomous vehicles using deep reinforcement learning
    Xue Chong
    Zhang Xinyu
    Jia Peng
    The Journal of China Universities of Posts and Telecommunications, 2018, 25 (06) : 58 - 64
  • [40] Intelligent Locking System using Deep Learning for Autonomous Vehicle in Internet of Things
    Zaleha, S. H.
    Ithnin, Nora
    Wahab, Nur Haliza Abdul
    Sunar, Noorhazirah
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2021, 12 (10) : 565 - 578