DQ Scheduler: Deep Reinforcement Learning Based Controller Synchronization in Distributed SDN

Cited: 0
Authors
Zhang, Ziyao [1 ]
Ma, Liang [2 ]
Poularakis, Konstantinos [3 ]
Leung, Kin K. [1 ]
Wu, Lingfei [2 ]
Affiliations
[1] Imperial Coll London, London, England
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY USA
[3] Yale Univ, New Haven, CT USA
Keywords
NETWORKS;
DOI
Not available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralized control, scalability, and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals for distributed SDN controller architectures, most existing works simply assume that such a logically centralized network view can be achieved with some synchronization design; the question of how exactly controllers should synchronize with each other to maximize the benefits of synchronization under eventual consistency assumptions is largely overlooked. To this end, we formulate the controller synchronization problem as a Markov Decision Process (MDP) and apply reinforcement learning techniques combined with a deep neural network to train a smart controller synchronization policy, which we call the Deep-Q (DQ) Scheduler. Evaluation results show that the DQ Scheduler outperforms the anti-entropy algorithm implemented in the ONOS controller by up to 95.2% for inter-domain routing tasks.
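This record contains no code; purely as an illustrative sketch of the approach the abstract describes (an MDP whose actions choose which peer domain to synchronize with, solved with deep Q-learning), the Python/PyTorch snippet below shows one way such a policy might be structured. The state features, number of domains, reward signal, and all names (QNetwork, select_sync_target, td_update) are assumptions for illustration only, not the authors' implementation.

    import random
    import torch
    import torch.nn as nn

    # Hypothetical sketch, not the paper's code.
    # State: features summarizing how stale each peer domain's view is.
    # Action: index of the peer domain controller to synchronize with next.

    NUM_DOMAINS = 4   # assumed number of peer domains
    STATE_DIM = 16    # assumed size of the network-state feature vector
    GAMMA = 0.9       # discount factor
    EPSILON = 0.1     # exploration rate

    class QNetwork(nn.Module):
        """Maps a network-state feature vector to one Q-value per sync action."""
        def __init__(self, state_dim: int, num_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, num_actions),
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            return self.net(state)

    def select_sync_target(qnet: QNetwork, state: torch.Tensor) -> int:
        """Epsilon-greedy choice of which peer domain to synchronize with."""
        if random.random() < EPSILON:
            return random.randrange(NUM_DOMAINS)
        with torch.no_grad():
            return int(qnet(state).argmax().item())

    def td_update(qnet, target_net, optimizer, state, action, reward, next_state):
        """One TD step: fit Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
        q_sa = qnet(state)[action]
        with torch.no_grad():
            target = reward + GAMMA * target_net(next_state).max()
        loss = nn.functional.mse_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        qnet = QNetwork(STATE_DIM, NUM_DOMAINS)
        target_net = QNetwork(STATE_DIM, NUM_DOMAINS)
        target_net.load_state_dict(qnet.state_dict())
        optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)

        state = torch.randn(STATE_DIM)      # placeholder network-state features
        action = select_sync_target(qnet, state)
        reward = torch.tensor(1.0)          # placeholder: gain in inter-domain routing quality
        next_state = torch.randn(STATE_DIM)
        td_update(qnet, target_net, optimizer, state, action, reward, next_state)

In the paper's terms, the reward would reflect the benefit that a synchronization action brings to inter-domain routing; the placeholder value above stands in for that measurement.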
Pages: 7
Related Papers
50 in total
  • [31] Robot Obstacle Avoidance Controller Based on Deep Reinforcement Learning
    Tang, Yaokun
    Chen, Qingyu
    Wei, Yuxin
    JOURNAL OF SENSORS, 2022, 2022
  • [32] Deep Reinforcement Learning Based Controller for Active Heave Compensation
    Zinage, Shrenik
    Somayajula, Abhilash
    IFAC PAPERSONLINE, 2021, 54 (16): : 161 - 167
  • [34] An SDN Controller-Based Network Slicing Scheme Using Constrained Reinforcement Learning
    Hlophe, Mduduzi C. C.
    Maharaj, Bodhaswar T.
    IEEE ACCESS, 2022, 10 : 134848 - 134869
  • [35] Distributed Reinforcement Learning Based Optimal Controller For Mobile Robot Formation
    Shinde, Chinmay
    Das, Kaushik
    Kumar, Swagat
    Behera, Laxmidhar
    2018 EUROPEAN CONTROL CONFERENCE (ECC), 2018, : 2800 - 2805
  • [36] RL-Routing: An SDN Routing Algorithm Based on Deep Reinforcement Learning
    Chen, Yi-Ren
    Rezapour, Amir
    Tzeng, Wen-Guey
    Tsai, Shi-Chun
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2020, 7 (04): : 3185 - 3199
  • [37] DRS: A deep reinforcement learning enhanced Kubernetes scheduler for microservice-based system
    Jian, Zhaolong
    Xie, Xueshuo
    Fang, Yaozheng
    Jiang, Yibing
    Lu, Ye
    Dash, Ankan
    Li, Tao
    Wang, Guiling
    SOFTWARE-PRACTICE & EXPERIENCE, 2024, 54 (10): : 2102 - 2126
  • [38] Precise and Adaptable: Leveraging Deep Reinforcement Learning for GAP-based Multipath Scheduler
    Liao, Binbin
    Zhang, Guangxing
    Diao, Zulong
    Xie, Gaogang
    2020 IFIP NETWORKING CONFERENCE AND WORKSHOPS (NETWORKING), 2020, : 154 - 162
  • [39] Multi-Path Routing Algorithm Based on Deep Reinforcement Learning for SDN
    Zhang, Yi
    Qiu, Lanxin
    Xu, Yangzhou
    Wang, Xinjia
    Wang, Shengjie
    Paul, Agyemang
    Wu, Zhefu
    APPLIED SCIENCES-BASEL, 2023, 13 (22):
  • [40] Learning the Optimal Synchronization Rates in Distributed SDN Control Architectures
    Poularakis, Konstantinos
    Qin, Qiaofeng
    Ma, Liang
    Kompella, Sastry
    Leung, Kin K.
    Tassiulas, Leandros
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2019), 2019, : 1099 - 1107