Improving Learning in a Mobile Robot using Adversarial Training

Times Cited: 0
Authors
Flyr, Todd W. [1 ]
Parsons, Simon [1 ,2 ]
Affiliations
[1] CUNY, Grad Ctr, Dept Comp Sci, New York, NY 10017 USA
[2] Univ Lincoln, Sch Comp Sci, Lincoln, England
Keywords
Mobile Robotics; GANs; Adversarial Training; Machine Learning;
DOI
10.5220/0010107100820089
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper reports research on training a mobile robot to carry out a simple task: striking a ball so that it hits a target on the ground. We trained a neural network to control the robot using data from a small number of trials with a physical robot. We compare the results of using this neural network with those of a neural network trained on the same dataset augmented with the output of a generative adversarial network (GAN) trained on that data. We find that the network trained on the GAN-augmented data performs better, and this advantage holds as we present the robot with generalized versions of the task. We also find that we can produce acceptable results from an exceptionally small initial dataset. We propose this as one way to address the "big data" problem, in which training a neural network to learn physical tasks requires a large corpus of labeled trial data that can be difficult to obtain.
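The abstract describes augmenting a small set of real robot trials with GAN-generated samples before training the controller network. The sketch below is a minimal, purely illustrative version of that idea in PyTorch; the feature layout, network sizes, trial count, and hyperparameters are assumptions for illustration and are not taken from the paper, and the random tensors stand in for logged strike parameters and outcomes.

```python
# Minimal sketch (not the authors' code): train a small GAN on a handful of
# recorded robot trials, then mix generated samples into the controller's
# training set. All sizes and hyperparameters below are illustrative guesses.
import torch
import torch.nn as nn

TRIAL_DIM = 4   # assumed layout, e.g. strike angle, strike speed, ball x, ball y
NOISE_DIM = 8
N_REAL = 30     # stands in for the "exceptionally small initial dataset"

real_trials = torch.rand(N_REAL, TRIAL_DIM)   # placeholder for logged trial data

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(),
    nn.Linear(32, TRIAL_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(TRIAL_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator step: real trials labeled 1, generated trials labeled 0.
    fake = generator(torch.randn(N_REAL, NOISE_DIM)).detach()
    d_loss = (bce(discriminator(real_trials), torch.ones(N_REAL, 1))
              + bce(discriminator(fake), torch.zeros(N_REAL, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make generated trials look real to the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(N_REAL, NOISE_DIM))),
                 torch.ones(N_REAL, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Augment the controller's training set with synthetic trials.
synthetic = generator(torch.randn(200, NOISE_DIM)).detach()
augmented_training_set = torch.cat([real_trials, synthetic], dim=0)
```

In the paper's setting, the controller network would then be trained once on the real trials alone and once on the augmented set, and the two controllers compared on the physical task.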
Pages: 82 - 89
Number of pages: 8
Related Papers (50 in total)
  • [41] Training a vision guided mobile robot
    Wyeth, G
    MACHINE LEARNING, 1998, 31 (1-3) : 201 - 222
  • [42] Obstacle avoidance of a mobile robot using hybrid learning approach
    Er, MJ
    Deng, C
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2005, 52 (03) : 898 - 905
  • [43] Adversarial Learning-Enabled Automatic WiFi Indoor Radio Map Construction and Adaptation With Mobile Robot
    Zou, Han
    Chen, Chun-Lin
    Li, Maoxun
    Yang, Jianfei
    Zhou, Yuxun
    Xie, Lihua
    Spanos, Costas J.
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (08) : 6946 - 6954
  • [44] Autonomous Exploration for Mobile Robot using Q-learning
    Liu, Yang
    Liu, Huaping
    Wang, Bowen
    2017 2ND INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM), 2017, : 614 - 619
  • [45] Using the GTSOM Network for Mobile Robot Navigation with Reinforcement Learning
    Menegaz, Mauricio
    Engel, Paulo M.
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1- 6, 2009, : 716 - 720
  • [46] Mobile robot navigation using neural Q-learning
    Yang, GS
    Chen, EK
    An, CW
    PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2004, : 48 - 52
  • [47] Mobile Robot Environment Learning and Localization Using Active Perception
    Lameski, Petre
    Kulakov, Andrea
    ICT INNOVATIONS 2010, 2011, 83 : 236 - +
  • [48] FASTER AUTONOMOUS MOBILE ROBOT LEARNING USING DECISION CORRECTION
    Narvydas, Gintautas
    Razanskas, Petras
    ECT 2009: ELECTRICAL AND CONTROL TECHNOLOGIES, 2009, : 44 - 47
  • [49] Brain Teleoperation of a Mobile Robot Using Deep Learning Technique
    Yuan, Yuxia
    Li, Zhijun
    Liu, Yiliang
    2018 3RD IEEE INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (IEEE ICARM), 2018, : 54 - 59
  • [50] Skills' learning in an autonomous mobile robot using continuous reinforcement
    Boada, MJL
    Salichs, MA
    ADVANCED FUZZY-NEURAL CONTROL 2001, 2002, : 117 - 122