Multi-Armed Bandit Learning for Cache Content Placement in Vehicular Social Networks

Cited by: 16
Authors
Bitaghsir, Saeid Akhavan [1 ]
Dadlani, Aresh [1 ]
Borhani, Muhammad [2 ,3 ]
Khonsari, Ahmad [2 ,3 ]
Affiliations
[1] Nazarbayev Univ, Dept Elect & Comp Engn, Nur Sultan 010000, Kazakhstan
[2] Univ Tehran, Dept Elect & Comp Engn, Tehran 1417466191, Iran
[3] Inst Res Fundamental Sci, Sch Comp Sci, Tehran 1956836681, Iran
Keywords
Social networking (online); Upper bound; Resource management; Cellular networks; Base stations; Simulation; Libraries; Vehicular social networks; multi-armed bandit; cache content placement; mobile cache unit; cache hit rate;
DOI
10.1109/LCOMM.2019.2941482
CLC number
TN [Electronic technology; communication technology]
Subject classification code
0809
Abstract
In this letter, the efficient dissemination of content in a socially aware, cache-enabled hybrid network is analyzed using multi-armed bandit learning theory. Specifically, an overlay cellular network over a vehicular social network is considered, where commuters request multimedia content from the stationary road-side units (RSUs), the base station, or, if accessible, the single mobile cache unit (MCU). First, we propose an algorithm to optimally distribute popular contents among the locally deployed RSU caches. To further maximize the cache hits experienced by vehicles, we then present an algorithm that finds the best traversal path for the MCU based on the commuters' social degree distribution. For performance evaluation, the asymptotic regret upper bounds of the two algorithms are also derived. Simulations reveal that the proposed algorithms outperform existing content placement methods in terms of overall network throughput.
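The cache-placement step described in the abstract can be framed as a stochastic multi-armed bandit: each candidate content item is an arm, and a cache hit is the reward. The letter's own algorithms are not reproduced in this record, so the following is only a generic UCB1 sketch under assumed names (`ucb1_select`, `run_bandit`) and synthetic per-content hit probabilities, not the authors' method.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the arm (content index) maximizing the UCB1 index.

    counts[i]  -- how often content i has been cached so far
    rewards[i] -- cumulative cache-hit reward observed for content i
    t          -- current round (1-based)
    """
    for i, n in enumerate(counts):
        if n == 0:                      # play every arm once first
            return i
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i]
        + math.sqrt(2.0 * math.log(t) / counts[i]),
    )

def run_bandit(hit_probs, rounds, seed=0):
    """Cache one content per round; reward is 1 on a (simulated) cache hit."""
    rng = random.Random(seed)
    k = len(hit_probs)
    counts, rewards = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        arm = ucb1_select(counts, rewards, t)
        reward = 1.0 if rng.random() < hit_probs[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    return counts

# Hypothetical per-content hit probabilities (e.g. Zipf-like popularity).
counts = run_bandit([0.8, 0.4, 0.1], rounds=2000)
print(counts.index(max(counts)))  # index of the most frequently cached content
```

Over enough rounds the UCB1 index concentrates the placements on the highest-hit-rate content while still occasionally exploring the others, which is what keeps the regret growth logarithmic in the horizon.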
Pages: 2321 - 2324
Number of pages: 4
Related Papers
50 records
  • [41] IMPROVING STRATEGIES FOR THE MULTI-ARMED BANDIT
    POHLENZ, S
    MARKOV PROCESS AND CONTROL THEORY, 1989, 54 : 158 - 163
  • [42] Multi-armed bandit heterogeneous ensemble learning for imbalanced data
    Dai, Qi
    Liu, Jian-wei
    Yang, Jiapeng
    COMPUTATIONAL INTELLIGENCE, 2023, 39 (02) : 344 - 368
  • [43] THE MULTI-ARMED BANDIT PROBLEM WITH COVARIATES
    Perchet, Vianney
    Rigollet, Philippe
    ANNALS OF STATISTICS, 2013, 41 (02) : 693 - 721
  • [44] The Multi-fidelity Multi-armed Bandit
    Kandasamy, Kirthevasan
    Dasarathy, Gautam
    Schneider, Jeff
    Poczos, Barnabas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [45] Multi-armed Bandit with Additional Observations
    Yun, D.
    Ahn, S.
    Proutiere, A.
    Shin, J.
    Yi, Y.
    Association for Computing Machinery, 2018, 46 : 53 - 55
  • [46] Burst-induced Multi-Armed Bandit for Learning Recommendation
    Alves, Rodrigo
    Ledent, Antoine
    Kloft, Marius
    15TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS 2021), 2021, : 292 - 301
  • [47] Cooperative Multi-player Multi-Armed Bandit: Computation Offloading in a Vehicular Cloud Network
    Xu, Shilin
    Guo, Caili
    Hu, Rose Qingyang
    Qian, Yi
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021
  • [48] Multi-Armed Bandit Learning in IoT Networks: Learning Helps Even in Non-stationary Settings
    Bonnefoi, Remi
    Besson, Lilian
    Moy, Christophe
    Kaufmann, Emilie
    Palicot, Jacques
    COGNITIVE RADIO ORIENTED WIRELESS NETWORKS, 2018, 228 : 173 - 185
  • [49] Contextual Multi-Armed Bandit for Cache-Aware Decoupled Multiple Association in UDNs: A Deep Learning Approach
    Dai, Chen
    Zhu, Kun
    Wang, Ran
    Chen, Bing
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2019, 5 (04) : 1046 - 1059
  • [50] Multi-User Communication Networks: A Coordinated Multi-Armed Bandit Approach
    Avner, Orly
    Mannor, Shie
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2019, 27 (06) : 2192 - 2207