Deploying SDN Control in Internet of UAVs: Q-Learning-Based Edge Scheduling

Cited: 20
Authors
Zhang, Chaofeng [1 ]
Dong, Mianxiong [2 ]
Ota, Kaoru [3 ]
Affiliations
[1] Adv Inst Ind Technol, Dept Ind Technol, Tokyo 1400011, Japan
[2] Muroran Inst Technol, Dept Sci & Informat, Muroran, Hokkaido 0508585, Japan
[3] Muroran Inst Technol, Dept Informat & Elect Engn, Muroran, Hokkaido 0508585, Japan
Keywords
Data collection; Optimization; Resource management; Throughput; Routing; Cloud computing; Production; Softwarized network; wireless networks and cellular networks; machine learning; Internet of Things services; distributed management; DATA-COLLECTION; RESOURCE-MANAGEMENT; IOT; NETWORK; THINGS
DOI
10.1109/TNSM.2021.3059159
CLC Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Nowadays, wilderness monitoring produces massive data streams that support agricultural production, environmental protection, and disaster monitoring. However, simply upgrading individual wireless nodes cannot meet today's softwarized network needs, given the explosion of multi-dimensional data and heterogeneous equipment. In this article, we present a comprehensive UAV-based data collection strategy for an "air-to-ground" intelligent softwarized collection system. The innovation of this article is that, after the IoT nodes complete the data collection process via the proposed bandwidth-weighted traffic pushing optimization (BWPTO) algorithm, the system infers future changes from the current network state using a deep Q-learning (DQL) network. Then, through the proposed Air-to-Ground Intelligent Information Pushing Optimization (AIIPO) algorithm, the entire network can proactively forward the uploaded information to nodes expected to be idle in the future, thus achieving optimized system performance. Through the final mathematical experiments, we prove the optimality of the proposed routing algorithm and forwarding strategy, which are more applicable to the dynamic "air-to-ground" distributed data collection system than other benchmark solutions.
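The DQL-based scheduling idea in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is a heavily simplified assumption, not the paper's method: the paper uses a deep Q-network over real network state, whereas here the state, actions, reward, and the rotating "idle node" dynamics are all hypothetical toy constructs chosen only to show how an agent learns to push data to the node that will be idle.

```python
import random

random.seed(0)

# Toy "air-to-ground" pushing problem: an agent chooses which of
# three ground nodes to push collected data to. Pushing to the
# idle node yields reward +1, a busy node -1. The idle node
# rotates deterministically, so the agent can learn to anticipate
# which node to target -- a cartoon of the paper's idea of
# forwarding data to nodes expected to be idle.
N_NODES = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: state (index of the currently idle node) -> action values
Q = [[0.0] * N_NODES for _ in range(N_NODES)]

def step(state, action):
    """Return (reward, next_state) for one push decision."""
    reward = 1.0 if action == state else -1.0
    next_state = (state + 1) % N_NODES  # idle node rotates
    return reward, next_state

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.randrange(N_NODES)
    else:
        action = max(range(N_NODES), key=lambda a: Q[state][a])
    reward, next_state = step(state, action)
    # standard Q-learning temporal-difference update
    Q[state][action] += ALPHA * (
        reward + GAMMA * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

# Learned greedy policy: push to whichever node is currently idle
policy = [max(range(N_NODES), key=lambda a: Q[s][a]) for s in range(N_NODES)]
print(policy)  # -> [0, 1, 2]
```

A DQL variant would replace the Q-table with a neural network approximating Q(state, action), which is what makes the approach scale to the multi-dimensional network states the abstract describes.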
Pages: 526-537
Page count: 12
Related Papers
50 items total
  • [41] Cost-efficient edge caching and Q-learning-based service selection policies in MEC
    Wu, Menghui
    Guo, Jingjing
    Li, Chunlin
    Luo, Youlong
    WIRELESS NETWORKS, 2023, 29 (01) : 285 - 301
  • [43] Q-learning-based Model-free Swing Up Control of an Inverted Pendulum
    Ghio, Alessio
    Ramos, Oscar E.
    PROCEEDINGS OF THE 2019 IEEE XXVI INTERNATIONAL CONFERENCE ON ELECTRONICS, ELECTRICAL ENGINEERING AND COMPUTING (INTERCON), 2019,
  • [44] A Q-learning-based network content caching method
    Chen, Haijun
    Tan, Guanzheng
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2018,
  • [46] A Q-learning-based algorithm for the block relocation problem
    Liu, Liqun
    Feng, Yuanjun
    Zeng, Qingcheng
    Chen, Zhijun
    Li, Yaqiu
    JOURNAL OF HEURISTICS, 2025, 31 (01)
  • [47] Q-learning-based multirate transmission control scheme for RRM in multimedia WCDMA systems
    Chen, YS
    Chang, CJ
    Ren, FC
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2004, 53 (01) : 38 - 48
  • [48] Integrated Optimization of Blocking Flowshop Scheduling and Preventive Maintenance Using a Q-Learning-Based Aquila Optimizer
    Ge, Zhenpeng
    Wang, Hongfeng
    SYMMETRY-BASEL, 2023, 15 (08):
  • [49] DTWN: Q-learning-based Transmit Power Control for Digital Twin WiFi Networks
    Cakir, L. V.
    Huseynov, K.
    Ak, E.
    Canberk, B.
    EAI ENDORSED TRANSACTIONS ON INDUSTRIAL NETWORKS AND INTELLIGENT SYSTEMS, 2022, 31
  • [50] Auto-scaling and computation offloading in edge/cloud computing: a fuzzy Q-learning-based approach
    Ma, Xiang
    Zong, Kexuan
    Rezaeipanah, Amin
    WIRELESS NETWORKS, 2024, 30 (02) : 637 - 648