Improved duelling deep Q-networks based path planning for intelligent agents

Cited: 3
Authors
Lin, Yejin [1 ]
Wen, Jiayi [1 ]
Institutions
[1] Dalian Maritime Univ, Lab Intelligent Marine Vehicles DMU, Dalian 116033, Peoples R China
Keywords
path planning; DQNs; deep Q-networks; reinforcement learning; importance sampling; neural networks
D O I
10.1504/IJVD.2023.131056
CLC Classification Number
TH [Machinery and Instrument Industry];
Subject Classification Number
0802 ;
Abstract
The natural deep Q-network (DQN) usually requires a long training time because uniform sampling makes its data usage relatively inefficient. Importance sampling (IS) promotes important experiences and makes neural network training more efficient. In this paper, an efficient learning mechanism based on the IS technique is incorporated into the duelling DQN algorithm and applied to a path planning task for an agent. Unlike the traditional DQN algorithm, the proposed algorithm improves sampling efficiency. In the experiments, four target points on the map are deployed to evaluate the loss and the accumulated reward. Simulations and comparisons across various scenarios demonstrate the effectiveness and superiority of the proposed path planning scheme for an intelligent agent.
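The IS mechanism the abstract describes is commonly realized as a prioritized replay buffer: transitions are sampled in proportion to their temporal-difference error, and importance-sampling weights correct the resulting bias. The sketch below illustrates this general technique; the class name and the hyper-parameters `alpha` and `beta` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay with importance-sampling weights.

    A sketch of the general mechanism, assuming the standard alpha/beta
    parameterization; not the paper's exact implementation.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha  # how strongly priorities bias the sampling
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current max priority so each is replayed at least once.
        max_prio = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.data)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # IS weights w_i = (N * P(i))^(-beta) correct the non-uniform sampling bias.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()  # normalize so the largest weight is 1 (stability)
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority proportional to |TD error|; eps keeps every transition sampleable.
        self.priorities[idx] = np.abs(td_errors) + eps
```

During training, the IS weights scale each transition's loss term before backpropagation, and `update_priorities` is called with the new TD errors after every gradient step.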
Pages: 232-247
Number of pages: 17
Related Papers
50 records total
  • [41] Autonomous UAV Navigation in Dynamic Environments with Double Deep Q-Networks
    Yang, Yupeng
    Zhang, Kai
    Liu, Dahai
    Song, Houbing
    2020 AIAA/IEEE 39TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC) PROCEEDINGS, 2020,
  • [42] A3DQN: Adaptive Anderson Acceleration for Deep Q-Networks
    Ermis, Melike
    Yang, Insoon
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 250 - 257
  • [43] CEMDQN: Cognitive-inspired Episodic Memory in Deep Q-networks
    Srivastava, Satyam
    Rathore, Heena
    Tiwari, Kamlesh
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [44] Multi-level deep Q-networks for Bitcoin trading strategies
    Otabek, Sattarov
    Choi, Jaeyoung
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [45] A Power Management Strategy for Parallel PHEV Using Deep Q-Networks
    Song, Changhee
    Lee, Heeyun
    Kim, Kyunghyun
    Cha, Suk Won
    2018 IEEE VEHICLE POWER AND PROPULSION CONFERENCE (VPPC), 2018,
  • [46] Task offloading with enhanced Deep Q-Networks for efficient industrial intelligent video analysis in edge-cloud collaboration
    Ji, Xiaofeng
    Gong, Faming
    Wang, Nuanlai
    Du, Chengze
    Yuan, Xiangbing
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [48] Learning How to Drive in a Real World Simulation with Deep Q-Networks
    Wolf, Peter
    Hubschneider, Christian
    Weber, Michael
    Bauer, Andre
    Haertl, Jonathan
    Duerr, Fabian
    Zoellner, J. Marius
    2017 28TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV 2017), 2017, : 244 - 250
  • [49] Scenario-Based Collision Avoidance Control with Deep Q-Networks for Industrial Robot Manipulators
    Sacchi, Nikolas
    Sangiovanni, Bianca
    Incremona, Gian Paolo
    Ferrara, Antonella
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 4388 - 4393
  • [50] Adaptively Scaffolding Cognitive Engagement with Batch Constrained Deep Q-Networks
    Fahid, Fahmid Morshed
    Rowe, Jonathan P.
    Spain, Randall D.
    Goldberg, Benjamin S.
    Pokorny, Robert
    Lester, James
    ARTIFICIAL INTELLIGENCE IN EDUCATION (AIED 2021), PT I, 2021, 12748 : 113 - 124