Multi-Agent Double Deep Q-Learning for Fairness in Multiple-Access Underlay Cognitive Radio Networks

Cited: 0
Authors
Ali, Zain [1 ]
Rezki, Zouheir [1 ]
Sadjadpour, Hamid [1 ]
Affiliations
[1] Electrical and Computer Engineering Department, Baskin School of Engineering, University of California at Santa Cruz, Santa Cruz, CA 95064, United States
Source
IEEE Transactions on Machine Learning in Communications and Networking, 2024, Vol. 2
Keywords
Cognitive systems - Game theory - Information management - Iterative methods - Multi-agent systems - Optimal systems - Radio interference - Radio systems - Radio transmission - Reinforcement learning - Resource allocation - Spectrum efficiency
DOI
10.1109/TMLCN.2024.3391216
Abstract
Underlay Cognitive Radio (CR) systems were introduced to resolve the issue of spectrum scarcity in wireless communication. In CR systems, an unlicensed Secondary Transmitter (ST) shares the channel with a licensed Primary Transmitter (PT). The spectral efficiency of CR systems can be increased further if multiple STs share the same channel. In underlay CR systems, the STs must keep their interference low enough to avoid an outage at the primary system. This interference restriction can prevent some STs from transmitting while other STs achieve high data rates, making the underlay CR network unfair. In this work, we consider the problem of achieving fairness in the rates of the STs. The resulting optimization problem is non-convex, and conventional iteration-based optimizers are time-consuming and may fail to converge on non-convex problems. To address this, we propose a deep-Q reinforcement learning (DQ-RL) framework that employs two separate deep neural networks for the computation and estimation of the Q-values, which provides a fast solution and is robust to channel dynamics. The proposed technique achieves near-optimal fairness while keeping the primary outage probability below 4%. Furthermore, increasing the number of STs results in only a linear increase in the computational complexity of the proposed framework. A comparison of several variants of the proposed scheme with the optimal solution is also presented. Finally, we present a novel cumulative-reward framework and discuss how the combined-reward approach improves the performance of the communication system. © 2024 The Authors.
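The two separate networks mentioned in the abstract correspond to the online/target split that characterizes double deep Q-learning: one network selects the greedy action and the other evaluates it, which mitigates Q-value overestimation. The snippet below is a minimal illustrative sketch of that target computation only, with hypothetical names and NumPy arrays standing in for the two networks' outputs; it is not the authors' implementation and omits the multi-agent and fairness-reward details of the paper.

```python
# Minimal sketch of a double deep Q-learning target (hypothetical names,
# NumPy stand-ins for network outputs; not the authors' code).
import numpy as np

def double_q_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Compute double-DQN bootstrap targets for a batch of transitions.

    q_online_next : (batch, n_actions) next-state Q-values from the online net
    q_target_next : (batch, n_actions) next-state Q-values from the target net
    rewards       : (batch,) immediate rewards (e.g., a fairness-based reward)
    dones         : (batch,) 1.0 if the episode terminated, else 0.0
    """
    # Action selection uses the online network ...
    best_actions = np.argmax(q_online_next, axis=1)
    # ... while the value of the selected action is read from the target
    # network, reducing the overestimation bias of vanilla deep Q-learning.
    best_values = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * best_values

# Toy usage with random numbers in place of real network outputs.
rng = np.random.default_rng(0)
targets = double_q_targets(rng.standard_normal((4, 3)),
                           rng.standard_normal((4, 3)),
                           rewards=np.ones(4), dones=np.zeros(4))
print(targets)
```

In a multi-agent setting such as the one considered here, each ST (agent) would maintain its own pair of networks and compute targets of this form from its local observations and rewards.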
Pages: 580 - 595
Related Papers (50 in total)
  • [1] Deep-Q Reinforcement Learning for Fairness in Multiple-Access Cognitive Radio Networks
    Ali, Zain
    Rezki, Zouheir
    Sadjadpour, Hamid
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 2023 - 2028
  • [2] Multi-agent Q-learning of Spectrum Access in Distributed Cognitive Radio Network
    Min Neng
    Wu Qi-hui
    Xu Yu-hua
    Ding Guo-ru
    INTERNATIONAL CONFERENCE OF CHINA COMMUNICATION (ICCC2010), 2010, : 656 - 660
  • [3] Multi-Agent Double Deep Q-Learning for Beamforming in mmWave MIMO Networks
    Wang, Xueyuan
    Gursoy, M. Cenk
    2020 IEEE 31ST ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC), 2020,
  • [4] Distributed dynamic spectrum access through multi-agent deep recurrent Q-learning in cognitive radio network
    Giri, Manish Kumar
    Majumder, Saikat
    PHYSICAL COMMUNICATION, 2023, 58
  • [5] Intelligent Dynamic Spectrum Access for Uplink Underlay Cognitive Radio Networks Based on Q-Learning
    Zhang, Jingjing
    Dong, Anming
    Yu, Jiguo
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, PT I, 2020, 12384 : 691 - 703
  • [6] Regularized Softmax Deep Multi-Agent Q-Learning
    Pan, Ling
    Rashid, Tabish
    Peng, Bei
    Huang, Longbo
    Whiteson, Shimon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Resource Allocation for Multi-user Cognitive Radio Systems using Multi-agent Q-Learning
    Azzouna, Ahmed
    Guezmil, Amel
    Sakly, Anis
    Mtibaa, Abdellatif
    ANT 2012 AND MOBIWIS 2012, 2012, 10 : 46 - 53
  • [8] Modular Production Control with Multi-Agent Deep Q-Learning
    Gankin, Dennis
    Mayer, Sebastian
    Zinn, Jonas
    Vogel-Heuser, Birgit
    Endisch, Christian
    2021 26TH IEEE INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES AND FACTORY AUTOMATION (ETFA), 2021,
  • [9] Q-learning in Multi-Agent Cooperation
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Lin, Tzung-Feng
    2008 IEEE WORKSHOP ON ADVANCED ROBOTICS AND ITS SOCIAL IMPACTS, 2008, : 239 - 244
  • [10] Multi-Agent Advisor Q-Learning
    Subramanian S.G.
    Taylor M.E.
    Larson K.
    Crowley M.
    Journal of Artificial Intelligence Research, 2022, 74 : 1 - 74