Learning How to Drive in a Real World Simulation with Deep Q-Networks

Cited: 0
Authors
Wolf, Peter [1 ]
Hubschneider, Christian [1 ]
Weber, Michael [1 ]
Bauer, Andre [2 ]
Haertl, Jonathan [2 ]
Duerr, Fabian [2 ]
Zoellner, J. Marius [1 ,2 ]
Affiliations
[1] FZI Res Ctr Informat Technol, D-76131 Karlsruhe, Germany
[2] KIT, Karlsruhe, Germany
Keywords
ENVIRONMENTS;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
We present a reinforcement learning approach that uses Deep Q-Networks to steer a vehicle in a 3D physics simulation. Relying solely on camera image input, the approach learns to steer the vehicle in an end-to-end manner. The system learns human-like driving behavior without the need for any labeled training data. We propose an action-based reward function, motivated by its potential use in real-world reinforcement learning scenarios. Compared to a naive distance-based reward function, it improves the overall driving behavior of the vehicle agent. The agent even reaches driving performance comparable to that of a human on a previously unseen track in our simulation environment.
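The abstract contrasts an action-based reward with a naive distance-based one. A minimal sketch of what such reward functions could look like for a discrete-steering DQN agent; all names, formulas, and thresholds here are illustrative assumptions, not the paper's actual definitions:

```python
def distance_based_reward(lateral_offset_m: float) -> float:
    """Naive reward: penalize the vehicle's lateral distance
    from the lane center (in meters). Assumed form."""
    return 1.0 - abs(lateral_offset_m)


def action_based_reward(chosen_action: int,
                        reference_action: int,
                        num_actions: int) -> float:
    """Action-based reward: score the chosen discrete steering
    action by its closeness to a reference action (e.g. the action
    that keeps the current smooth trajectory). Scaled to [-1, 1].
    Assumed form for illustration."""
    gap = abs(chosen_action - reference_action)
    return 1.0 - 2.0 * gap / (num_actions - 1)
```

With 7 steering actions, picking the reference action yields the maximum reward of 1.0, while picking the opposite extreme yields -1.0; the distance-based variant instead depends only on the resulting position, which can reward oscillating steering as long as the car stays near the center.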
Pages: 244-250
Page count: 7