Improved duelling deep Q-networks based path planning for intelligent agents

Cited by: 3
Authors
Lin, Yejin [1 ]
Wen, Jiayi [1 ]
Affiliations
[1] Dalian Maritime Univ, Lab Intelligent Marine Vehicles DMU, Dalian 116033, Peoples R China
Keywords
path planning; DQNs; deep Q-networks; reinforcement learning; importance sampling; neural networks
DOI
10.1504/IJVD.2023.131056
Chinese Library Classification (CLC)
TH [machinery and instrument industry]
Discipline code
0802
Abstract
The natural deep Q-network (DQN) usually requires a long training time because uniform sampling makes its use of experience data relatively inefficient. Importance sampling (IS) promotes important experiences and makes the neural network training process more efficient. In this paper, an efficient learning mechanism based on the IS technique is incorporated into the duelling DQN algorithm and applied to a path planning task for an agent. Unlike the traditional DQN algorithm, the proposed algorithm improves sampling efficiency. In the experiment, four target points on the map are deployed to evaluate the loss and the accumulated reward. Simulations and comparisons in various situations demonstrate the effectiveness and superiority of the proposed path planning scheme for an intelligent agent.
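The following is a minimal sketch, not the authors' released code, of the two ingredients the abstract describes: a duelling Q-network and a replay buffer that samples transitions by priority and corrects the resulting bias with importance-sampling (IS) weights. The layer sizes, hyper-parameters, and flat state encoding are illustrative assumptions; the paper's grid map, reward design, and four-target evaluation are not reproduced here.

# Sketch only: duelling Q-network + priority sampling with IS weight correction.
# All sizes and hyper-parameters below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')  (duelling decomposition)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, x):
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

class PrioritizedReplay:
    """Samples transition i with probability p_i^alpha / sum_j p_j^alpha and
    returns IS weights w_i = (N * P(i))^(-beta), normalised by their maximum."""
    def __init__(self, capacity=10_000, alpha=0.6, beta=0.4):
        self.capacity, self.alpha, self.beta = capacity, alpha, beta
        self.buffer, self.priorities = [], []

    def push(self, transition):
        max_p = max(self.priorities, default=1.0)      # new samples get max priority
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(max_p)

    def sample(self, batch_size):
        p = np.asarray(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        w = (len(self.buffer) * p[idx]) ** (-self.beta)
        w /= w.max()                                   # normalise weights for stability
        batch = [self.buffer[i] for i in idx]
        return batch, idx, torch.as_tensor(w, dtype=torch.float32)

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(float(e)) + 1e-6  # keep priorities non-zero

def train_step(q_net, target_net, buffer, optimizer, batch_size=32, gamma=0.99):
    """One gradient step: squared TD errors weighted by the buffer's IS weights."""
    transitions, idx, w = buffer.sample(batch_size)
    s, a, r, s2, done = (torch.as_tensor(np.array(x), dtype=torch.float32)
                         for x in zip(*transitions))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
    td_error = target - q
    loss = (w * td_error.pow(2)).mean()                # IS weights scale each sample
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.update_priorities(idx, td_error.detach())   # refresh sampled priorities
    return loss.item()

In practice, beta is usually annealed toward 1 over training and a sum-tree structure replaces the lists above for efficient sampling; both refinements are omitted here to keep the sketch short.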
Pages: 232-247
Page count: 17
Related papers
50 records in total
  • [31] Path planning for intelligent robots based on deep Q-learning with experience replay and heuristic knowledge
    Jiang, Lan
    Huang, Hongyun
    Ding, Zuohua
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2020, 7 (04) : 1179 - 1189
  • [32] DinoDroid: Testing Android Apps Using Deep Q-Networks
    Zhao, Yu
    Harrison, Brent
    Yu, Tingting
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (05)
  • [33] Optimization methods for improved efficiency and performance of Deep Q-Networks upon conversion to neuromorphic population platforms
    Tan, Weihao
    Kozma, Robert
    Patel, Devdhar
    KNOWLEDGE-BASED SYSTEMS, 2022, 241
  • [34] Path planning for intelligent vehicles based on improved D* Lite
    Li, Xiaomei
    Lu, Ye
    Zhao, Xiaoyu
    Deng, Xiong
    Xie, Zhijiang
    JOURNAL OF SUPERCOMPUTING, 2024, 80 (01): 1294 - 1330
  • [36] Reinforcement Learning with an Ensemble of Binary Action Deep Q-Networks
    Hafiz, A.M.
    Hassaballah, M.
    Alqahtani, A.
    Alsubai, S.
    Hameed, M.A.
    Computer Systems Science and Engineering, 2023, 46 (03): 2651 - 2666
  • [37] Spatio-Temporal Deep Q-Networks for Human Activity Localization
    Xu, Wanru
    Yu, Jian
    Miao, Zhenjiang
    Wan, Lili
    Ji, Qiang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (09) : 2984 - 2999
  • [38] Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis
    Tabaro, Leon
    Kinani, Jean Marie Vianney
    Rosales-Silva, Alberto Jorge
    Salgado-Ramirez, Julio Cesar
    Mujica-Vargas, Dante
    Escamilla-Ambrosio, Ponciano Jorge
    Ramos-Diaz, Eduardo
    INFORMATION, 2024, 15 (08)
  • [39] Wireless Lan Performance Enhancement Using Double Deep Q-Networks
    Asaf, Khizra
    Khan, Bilal
    Kim, Ga-Young
    APPLIED SCIENCES-BASEL, 2022, 12 (09)
  • [40] Double Deep Q-Networks for Optimizing Electricity Cost of a Water Heater
    Amasyali, Kadir
    Kurte, Kuldeep
    Zandi, Helia
    Munk, Jeffrey
    Kotevska, Olivera
    Smith, Robert
    2021 IEEE POWER & ENERGY SOCIETY INNOVATIVE SMART GRID TECHNOLOGIES CONFERENCE (ISGT), 2021,