Analysis of Q-learning on ANNs for Robot Control using Live Video Feed

Cited by: 0
Authors
Murali, Nihal [1 ]
Gupta, Kunal [1 ]
Bhanot, Surekha [1 ]
Affiliations
[1] BITS Pilani, Dept Elect & Elect Engn, Pilani Campus, Pilani 333031, Rajasthan, India
Keywords
Artificial neural networks; Hardware implementation; Q-learning; Raw image inputs; Reinforcement learning; Robot learning;
DOI
Not available
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Training of artificial neural networks (ANNs) using reinforcement learning (RL) techniques is widely discussed in the robot learning literature. The high model capacity of ANNs combined with the model-free nature of RL algorithms provides a desirable combination for many robotics applications. There is a strong need for algorithms that generalize from raw sensory inputs, such as vision, without any hand-engineered features or domain heuristics. In this paper, the standard control problem of a line-following robot was used as a test-bed, and an ANN controller for the robot was trained on images from a live video feed using Q-learning. A virtual agent was first trained in a simulation environment and then deployed onto the robot's hardware. The robot successfully learns to traverse a wide range of curves and displays excellent generalization ability. Qualitative analysis of the evolution of the policies, the performance, and the network weights provides insight into the nature and convergence of the learning algorithm.
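Since the abstract describes training an ANN controller with Q-learning directly on camera frames, a minimal sketch of such a training loop is given below. This is an illustrative example under stated assumptions, not the authors' implementation: the network size, the three-action steering set, the 32x32 grayscale frame shape, and the placeholder reward signal are all hypothetical, and PyTorch is used purely for convenience.

```python
# Minimal sketch of Q-learning with an ANN function approximator on raw frames.
# All sizes, actions, and rewards below are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn

N_ACTIONS = 3            # hypothetical discrete actions: steer left, straight, steer right
FRAME_SHAPE = (32, 32)   # hypothetical downsampled grayscale frame from the video feed
GAMMA, EPSILON, LR = 0.9, 0.1, 1e-3

# Small fully connected Q-network mapping a flattened frame to action values.
q_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(FRAME_SHAPE[0] * FRAME_SHAPE[1], 64),
    nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=LR)

def select_action(frame: np.ndarray) -> int:
    """Epsilon-greedy action selection from the current Q-network."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    with torch.no_grad():
        q = q_net(torch.from_numpy(frame).float().unsqueeze(0))
    return int(q.argmax(dim=1).item())

def q_update(frame, action, reward, next_frame, done):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    s = torch.from_numpy(frame).float().unsqueeze(0)
    s_next = torch.from_numpy(next_frame).float().unsqueeze(0)
    q_sa = q_net(s)[0, action]
    with torch.no_grad():
        target = reward + (0.0 if done else GAMMA * q_net(s_next).max().item())
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage: random frames stand in for the simulator / live camera feed.
frame = np.random.rand(*FRAME_SHAPE).astype(np.float32)
for _ in range(10):
    a = select_action(frame)
    next_frame = np.random.rand(*FRAME_SHAPE).astype(np.float32)
    reward, done = float(np.random.rand()), False   # placeholder reward signal
    q_update(frame, a, reward, next_frame, done)
    frame = next_frame
```

In the paper's setup, the simulated line-following environment would supply the frames and reward in place of the random placeholders above, and the trained network would then be transferred to the robot hardware.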
Pages: 524-529
Page count: 6
Related Papers
50 records in total
  • [31] IoT Enabled Indoor Autonomous Mobile Robot using CNN and Q-Learning
    Saravanan, M.
    Kumar, P. Satheesh
    Sharma, Amit
    2019 IEEE INTERNATIONAL CONFERENCE ON INDUSTRY 4.0, ARTIFICIAL INTELLIGENCE, AND COMMUNICATIONS TECHNOLOGY (IAICT), 2019, : 7 - 13
  • [32] Improving Positioning Accuracy of an Articulated Robot Using Deep Q-Learning Algorithms
    Petronis, Algirdas
    Bucinskas, Vytautas
    Sumanas, Marius
    Dzedzickis, Andrius
    Petrauskas, Liudas
    Sitiajev, Nikita Edgar
    Morkvenaite-Vilkonciene, Inga
    AUTOMATION 2020: TOWARDS INDUSTRY OF THE FUTURE, 2020, 1140 : 257 - 266
  • [33] Building A Socially Acceptable Navigation and Behavior of A Mobile Robot Using Q-Learning
    Dewantara, Bima Sena Bayu
    2016 INTERNATIONAL CONFERENCE ON KNOWLEDGE CREATION AND INTELLIGENT COMPUTING (KCIC), 2016, : 88 - 93
  • [34] Experimental evaluation of new navigator of mobile robot using fuzzy Q-learning
    Lachekhab, Fadhila
    Tadjine, Mohamed
    Kesraoui, Mohamed
    INTERNATIONAL JOURNAL OF ENGINEERING SYSTEMS MODELLING AND SIMULATION, 2019, 11 (02) : 50 - 59
  • [35] Mobile robot Navigation Based on Q-Learning Technique
    Khriji, Lazhar
    Touati, Farid
    Benhmed, Kamel
    Al-Yahmedi, Amur
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2011, 8 (01): : 45 - 51
  • [36] A Hybrid Fuzzy Q-Learning algorithm for robot navigation
    Gordon, Sean W.
    Reyes, Napoleon H.
    Barczak, Andre
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 2625 - 2631
  • [37] Accuracy based fuzzy Q-learning for robot behaviours
    Gu, DB
    Hu, HS
    2004 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3, PROCEEDINGS, 2004, : 1455 - 1460
  • [38] Neural Q-learning in Motion Planning for Mobile Robot
    Qin, Zheng
    Gu, Jason
    2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS ( ICAL 2009), VOLS 1-3, 2009, : 1024 - 1028
  • [39] Optimization of industrial robot grasping processes with Q-learning
    Belke, Manuel
    Joeressen, Till
    Petrovic, Oliver
    Brecher, Christian
    2023 5TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS, ICCR, 2023, : 113 - 119
  • [40] Neural Q-Learning Based Mobile Robot Navigation
    Yun, Soh Chin
    Parasuraman, S.
    Ganapathy, V.
    Joe, Halim Kusuma
    MATERIALS SCIENCE AND INFORMATION TECHNOLOGY, PTS 1-8, 2012, 433-440 : 721 - +