SLAYO-RL: A Target-Driven Deep Reinforcement Learning Approach with SLAM and YoLo for an Enhanced Autonomous Agent

Times Cited: 0
Authors
Montes, Jose [1 ]
Kohwalter, Troy Costa [1 ]
Clua, Esteban [1 ]
Affiliations
[1] Univ Fed Fluminense, Niteroi, RJ, Brazil
Keywords
SLAM; YoLo; Deep Reinforcement Learning
DOI
10.1109/LARS/SBR/WRE59448.2023.10332988
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
This article presents an approach for training an agent to reach a specific, predetermined target in an unknown environment using reinforcement learning, with the agent equipped with a Lidar sensor and a camera. Because raw high-dimensional sensor data is difficult to use directly for training a reinforcement learning agent, the Lidar data is processed with Simultaneous Localization and Mapping (SLAM) to provide the agent's location in space. To identify the target of interest, the camera image is processed with the YoLo object detection model, which yields the target's coordinates in the image. Beyond forming the agent's state, the two technologies are also used to compose the agent's reward, so that the agent learns to explore an unknown environment and, once the target is located, to move towards it until it collides with the target. The proposed approach differs from the state of the art in that it unites the two technologies in the composition of both the agent's state and its reward.
Pages: 296-301
Number of Pages: 6
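
The abstract describes composing the agent's state and reward from the SLAM pose estimate and the YoLo detection, but the record gives no exact formulation. The Python sketch below illustrates one plausible composition under stated assumptions; the function names, sentinel values, scaling constants, and the exploration and target-centering terms are illustrative guesses, not the authors' actual design.

```python
import numpy as np

def compose_state(slam_pose, yolo_detection, lidar_scan):
    # Hypothetical state vector (not from the paper): SLAM pose (x, y, yaw)
    # estimated from the Lidar, plus normalized image coordinates and
    # confidence of the YoLo-detected target (sentinel values when the
    # target is not visible), plus the raw laser scan.
    x, y, yaw = slam_pose
    if yolo_detection is not None:
        u, v, conf = yolo_detection          # normalized (u, v) in [0, 1]
    else:
        u, v, conf = -1.0, -1.0, 0.0         # target not detected yet
    return np.concatenate(([x, y, yaw, u, v, conf], lidar_scan))

def compose_reward(reached_target, hit_obstacle, yolo_detection, new_map_area):
    # Hypothetical reward: terminal bonus/penalty for reaching the target or
    # colliding with an obstacle, a small exploration bonus proportional to
    # newly mapped area (from SLAM), and a bonus for keeping the detected
    # target centered in the image (from YoLo). All weights are assumed.
    if reached_target:
        return 10.0
    if hit_obstacle:
        return -10.0
    reward = 0.01 * new_map_area
    if yolo_detection is not None:
        u, _, conf = yolo_detection
        reward += 0.1 * conf * (1.0 - 2.0 * abs(u - 0.5))
    return reward
```

A sketch like this makes the paper's claimed contribution concrete: both functions consume the SLAM and YoLo outputs, so the two technologies shape the state and the reward signal simultaneously.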