A Safe Training Approach for Deep Reinforcement Learning-based Traffic Engineering

Cited: 0
Authors
Wang, Linghao [1 ,2 ]
Wang, Miao [1 ]
Zhang, Yujun [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Nanjing Inst Informat Superbahn, Nanjing, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Traffic Engineering; Safe Reinforcement Learning; Learning from Demonstration;
DOI
10.1109/ICC45855.2022.9838944
CLC Classification Number
TN [Electronic Technology, Communication Technology];
Subject Classification Code
0809;
Abstract
Traffic engineering (TE) is fundamental to modern communication networks. Deep reinforcement learning (DRL)-based TE solutions can solve TE in a data-driven, model-free way and have therefore attracted much attention recently. However, most of these solutions ignore that TE is a real-world application, and applying DRL to real-world TE raises challenges such as: (1) Efficiency. Existing DRL agents learn from scratch and need long interaction periods before finding solutions better than traditional methods. (2) Safety. Existing DRL-based solutions make TE decisions without considering safety constraints, so poor decisions may be made and cause significant performance degradation. In this paper, we propose a safe training approach for DRL-based TE that addresses both problems. It focuses on making full use of available data and on ensuring safety, so that the DRL agent for TE learns more quickly and potentially poor decisions are never applied to the real environment. We implemented the proposed method in ns-3, and simulation results show that it converges faster and performs better than other DRL-based methods while ensuring the safety of the applied TE decisions.
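The abstract does not specify how the safety constraint is enforced, but a common pattern in safe DRL for TE is a shielding layer: check each proposed traffic-split decision against a link-utilization constraint and fall back to a known-safe baseline (e.g., an ECMP split) when the check fails. The sketch below illustrates that general idea only; `is_safe`, `safe_action`, the 0.9 utilization threshold, and the path-to-link incidence matrix are all hypothetical names and assumptions, not details from the paper.

```python
import numpy as np

def is_safe(split_ratios, traffic_demand, path_link_matrix,
            link_capacity, max_util=0.9):
    """Check whether a candidate TE decision keeps every link's
    utilization below max_util (a hypothetical safety constraint)."""
    # Traffic volume routed on each candidate path.
    path_load = traffic_demand * split_ratios
    # Aggregate path loads onto links via the path-to-link incidence
    # matrix: link_load[l] = sum of loads on paths traversing link l.
    link_load = path_link_matrix.T @ path_load
    return bool(np.all(link_load / link_capacity <= max_util))

def safe_action(drl_action, baseline_action, traffic_demand,
                path_link_matrix, link_capacity):
    """Apply the DRL-proposed decision only if it passes the safety
    check; otherwise fall back to a known-safe baseline (e.g., ECMP)."""
    if is_safe(drl_action, traffic_demand, path_link_matrix, link_capacity):
        return drl_action
    return baseline_action

# Toy example: one flow of 10 units, two disjoint single-link paths,
# each link with capacity 10.
M = np.array([[1.0, 0.0],   # path 0 uses link 0
              [0.0, 1.0]])  # path 1 uses link 1
cap = np.array([10.0, 10.0])
risky = np.array([1.0, 0.0])   # would drive link 0 to 100% utilization
ecmp = np.array([0.5, 0.5])    # safe even split
applied = safe_action(risky, ecmp, 10.0, M, cap)  # falls back to ecmp
```

In this toy case the DRL action would saturate link 0, so the shield rejects it and the even split is applied instead; a safe proposal (e.g., a 40/60 split) would pass through unchanged.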
Pages: 1450-1455
Page count: 6
Related Papers
50 records in total
  • [31] From Local to Global: A Curriculum Learning Approach for Reinforcement Learning-based Traffic Signal Control
    Zheng, Nianzhao
    Li, Jialong
    Mao, Zhenyu
    Tei, Kenji
    2022 2ND IEEE INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE (SEAI 2022), 2022, : 253 - 258
  • [32] DEEP REINFORCEMENT LEARNING-BASED IRRIGATION SCHEDULING
    Yang, Y.
    Hu, J.
    Porter, D.
    Marek, T.
    Heflin, K.
    Kong, H.
    Sun, L.
    TRANSACTIONS OF THE ASABE, 2020, 63 (03) : 549 - 556
  • [33] On Deep Reinforcement Learning for Traffic Engineering in SD-WAN
    Troia, Sebastian
    Sapienza, Federico
    Vare, Leonardo
    Maier, Guido
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (07) : 2198 - 2212
  • [34] A Deep Reinforcement Learning Approach to Traffic Signal Control
    Razack, Aquib Junaid
    Ajith, Vysyakh
    Gupta, Rajiv
    2021 IEEE CONFERENCE ON TECHNOLOGIES FOR SUSTAINABILITY (SUSTECH2021), 2021,
  • [35] Reinforcement Learning-Based Cooperative Traffic Control System
    Barta, Zoltan
    Kovacs, Szilard
    Botzheim, Janos
    COMPUTATIONAL COLLECTIVE INTELLIGENCE, PT II, ICCCI 2024, 2024, 14811 : 176 - 188
  • [36] A Deep Reinforcement Learning Approach for Ramp Metering Based on Traffic Video Data
    Liu, Bing
    Tang, Yu
    Ji, Yuxiong
    Shen, Yu
    Du, Yuchuan
    JOURNAL OF ADVANCED TRANSPORTATION, 2021, 2021
  • [37] Safe Deep Reinforcement Learning-Based Constrained Optimal Control Scheme for HEV Energy Management
    Liu, Zemin Eitan
    Zhou, Quan
    Li, Yanfei
    Shuai, Shijin
    Xu, Hongming
    IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, 2023, 9 (03): : 4278 - 4293
  • [38] Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks
    Kou, Peng
    Liang, Deliang
    Wang, Chen
    Wu, Zihao
    Gao, Lin
    APPLIED ENERGY, 2020, 264 (264)
  • [39] Rescue path planning for urban flood: A deep reinforcement learning-based approach
    Li, Xiao-Yan
    Wang, Xia
    RISK ANALYSIS, 2024,
  • [40] Deep reinforcement learning-based approach for rumor influence minimization in social networks
    Jiang, Jiajian
    Chen, Xiaoliang
    Huang, Zexia
    Li, Xianyong
    Du, Yajun
    APPLIED INTELLIGENCE, 2023, 53 (17) : 20293 - 20310