Efficient exploration in reinforcement learning-based cognitive radio spectrum sharing

Cited by: 41
Authors
Jiang, T. [1 ]
Grace, D. [1 ]
Mitchell, P. D. [1 ]
Affiliations
[1] Univ York, Dept Elect, Commun Res Grp, York YO10 5DD, N Yorkshire, England
Keywords
CHANNEL ASSIGNMENT; POWER-CONTROL;
DOI
10.1049/iet-com.2010.0258
CLC Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Subject Classification
0808 ; 0809 ;
Abstract
This study introduces two novel approaches, pre-partitioning and weight-driven exploration, to enable an efficient learning process in the context of cognitive radio. Learning efficiency is crucial when applying reinforcement learning to cognitive radio, since cognitive radio users cause a higher level of disturbance during the exploration phase. The study investigates careful control of the trade-off between exploration and exploitation, so that a learning-enabled cognitive radio can learn efficiently from its interactions with a dynamic radio environment. In the pre-partitioning scheme, the potential action space of cognitive radios is reduced by initially partitioning the spectrum randomly in each cognitive radio. Cognitive radios are therefore able to finish their exploration stage faster than under more basic reinforcement learning-based schemes. In the weight-driven exploration scheme, exploitation is merged into exploration by using the knowledge gained during exploration to influence action selection, thereby achieving a more efficient exploration phase. Learning efficiency in a cognitive radio scenario is defined, and the learning efficiency of the proposed schemes is investigated. The simulation results show that exploration becomes more efficient with pre-partitioning and weight-driven exploration, and that system performance improves accordingly.
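The two ideas in the abstract can be illustrated with a minimal sketch: random pre-partitioning shrinks each radio's channel set before learning begins, and weight-driven (roulette-wheel) selection biases exploration toward channels that have worked before. This is an illustrative reading of the abstract, not the authors' exact algorithm; the function names, the uniform-prior weights, and the success-count update are assumptions.

```python
import random

def pre_partition(channels, num_radios, seed=0):
    # Randomly assign each channel to exactly one radio, so every radio
    # starts with a smaller action space and finishes exploration sooner.
    rng = random.Random(seed)
    shuffled = list(channels)
    rng.shuffle(shuffled)
    return [shuffled[i::num_radios] for i in range(num_radios)]

def weight_driven_choice(weights, rng):
    # Roulette-wheel selection: a channel is picked with probability
    # proportional to its learned weight, merging exploitation into
    # the exploration phase instead of exploring uniformly at random.
    total = sum(weights.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for channel, w in weights.items():
        acc += w
        if r <= acc:
            return channel
    return channel  # fallback for floating-point rounding

# One radio's partition and its per-channel weights (here: success counts).
partition = pre_partition(range(10), num_radios=3)[0]
weights = {ch: 1.0 for ch in partition}  # uniform prior over the partition
rng = random.Random(1)
ch = weight_driven_choice(weights, rng)
weights[ch] += 1.0  # reward an assumed interference-free transmission
```

With success-count weights, a channel that repeatedly yields interference-free transmissions accumulates weight and is selected ever more often, so the exploration phase converges toward exploitation without a hard phase switch.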
Pages: 1309-1317
Number of pages: 9
Related Papers
50 records in total
  • [1] Reinforcement learning-based spectrum handoff scheme with measured PDR in cognitive radio networks
    Shi, Qianqian
    Shao, Wei
    Fang, Bing
    Zhang, Yan
    Zhang, Yunyang
    ELECTRONICS LETTERS, 2019, 55 (25) : 1368 - +
  • [2] Reinforcement Learning-Based Trust and Reputation Model for Spectrum Leasing in Cognitive Radio Networks
    Ling, Mee Hong
    Yau, Kok-Lim Alvin
    2013 INTERNATIONAL CONFERENCE ON IT CONVERGENCE AND SECURITY (ICITCS), 2013,
  • [3] An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks
    Mustapha, Ibrahim
    Ali, Borhanuddin Mohd
    Rasid, Mohd Fadlee A.
    Sali, Aduwati
    Mohamad, Hafizal
    SENSORS, 2015, 15 (08): : 19783 - 19818
  • [4] Adversarial Learning-Based Spectrum Sensing in Cognitive Radio
    Wang, Chen
    Xu, Yizhen
    Chen, Zhuo
    Tian, Jinfeng
    Cheng, Peng
    Li, Mingqi
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11 (03) : 498 - 502
  • [5] Learning-Based Spectrum Sensing for Cognitive Radio Systems
    Hassan, Yasmin
    El-Tarhuni, Mohamed
    Assaleh, Khaled
    JOURNAL OF COMPUTER NETWORKS AND COMMUNICATIONS, 2012, 2012
  • [6] Deep Reinforcement Learning-Based RIS-Assisted Cooperative Spectrum Sensing in Cognitive Radio Network
    Xu, Mingdong
    Song, Xiaokai
    Zhao, Yanlong
    Yin, Zhendong
    Wu, Zhilu
    IEICE TRANSACTIONS ON COMMUNICATIONS, 2025, E108B (04) : 404 - 410
  • [7] Deep Learning-Based Spectrum Sensing for Cognitive Radio Applications
    Abdelbaset, Sara E.
    Kasem, Hossam M.
    Khalaf, Ashraf A.
    Hussein, Amr H.
    Kabeel, Ahmed A.
    SENSORS, 2024, 24 (24)
  • [8] Federated Learning-Based Cooperative Spectrum Sensing in Cognitive Radio
    Chen, Zhibo
    Xu, Yi-Qun
    Wang, Hongbin
    Guo, Daoxing
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (02) : 330 - 334
  • [9] Deep learning-based spectrum sensing and modulation categorization for efficient data transmission in cognitive radio
    Vijay, E. Vargil
    Aparna, K.
    PHYSICA SCRIPTA, 2024, 99 (12)
  • [10] Reinforcement Learning-Based Cognitive Radio Transmission Scheduling in Vehicular Systems
    Li, Yun
    Chang, Yuyuan
    Fukawa, Kazuhiko
    Kodama, Naoki
    2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING, 2023,