Learning How to Drive in a Real World Simulation with Deep Q-Networks

Cited by: 0
|
Authors
Wolf, Peter [1 ]
Hubschneider, Christian [1 ]
Weber, Michael [1 ]
Bauer, Andre [2 ]
Haertl, Jonathan [2 ]
Duerr, Fabian [2 ]
Zoellner, J. Marius [1 ,2 ]
Affiliations
[1] FZI Res Ctr Informat Technol, D-76131 Karlsruhe, Germany
[2] KIT, Karlsruhe, Germany
Keywords
ENVIRONMENTS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a reinforcement learning approach using Deep Q-Networks to steer a vehicle in a 3D physics simulation. Relying solely on camera image input, the approach learns to steer the vehicle in an end-to-end manner. The system is able to learn human driving behavior without the need for any labeled training data. An action-based reward function is proposed, motivated by its potential use in real-world reinforcement learning scenarios. Compared to a naive distance-based reward function, it improves the overall driving behavior of the vehicle agent. The agent even reaches driving performance comparable to a human driver on a previously unseen track in our simulation environment.
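The abstract contrasts an action-based reward with a naive distance-based one. The sketch below illustrates that distinction in minimal form; the function names, the scalar steering interface, and both reward formulas are illustrative assumptions for this record, not the paper's actual formulation:

```python
# Hypothetical sketch of the two reward styles mentioned in the abstract.
# A distance-based reward scores the vehicle's position in the lane,
# while an action-based reward scores the chosen steering command
# against a reference command (e.g. one a human driver would pick).

def distance_reward(lateral_offset, lane_half_width=2.0):
    """Naive reward: 1.0 at the lane center, 0.0 at the lane boundary."""
    return max(0.0, 1.0 - abs(lateral_offset) / lane_half_width)

def action_reward(agent_steering, reference_steering, max_steering=1.0):
    """Action-based reward: penalize deviation of the agent's steering
    command from a reference command, normalized to [0, 1]."""
    deviation = abs(agent_steering - reference_steering) / (2.0 * max_steering)
    return 1.0 - deviation

# Example: the car is 0.5 m off-center, and the agent steers 0.2
# while the reference command is 0.0 (straight ahead).
r_dist = distance_reward(lateral_offset=0.5)          # 0.75
r_act = action_reward(agent_steering=0.2,
                      reference_steering=0.0)         # 0.9
```

The action-based variant gives feedback on every decision rather than only on the resulting pose, which is one plausible reason it could shape driving behavior more directly.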
Pages: 244-250
Page count: 7