Multi-Armed Bandit Learning for Cache Content Placement in Vehicular Social Networks

Times Cited: 16
Authors
Bitaghsir, Saeid Akhavan [1 ]
Dadlani, Aresh [1 ]
Borhani, Muhammad [2 ,3 ]
Khonsari, Ahmad [2 ,3 ]
Affiliations
[1] Nazarbayev Univ, Dept Elect & Comp Engn, Nur Sultan 010000, Kazakhstan
[2] Univ Tehran, Dept Elect & Comp Engn, Tehran 1417466191, Iran
[3] Inst Res Fundamental Sci, Sch Comp Sci, Tehran 1956836681, Iran
Keywords
Social networking (online); Upper bound; Resource management; Cellular networks; Base stations; Simulation; Libraries; Vehicular social networks; multi-armed bandit; cache content placement; mobile cache unit; cache hit rate;
DOI
10.1109/LCOMM.2019.2941482
CLC Number
TN [Electronics and Communication Technology];
Discipline Code
0809
Abstract
In this letter, the efficient dissemination of content in a socially aware, cache-enabled hybrid network is analyzed using multi-armed bandit learning theory. Specifically, an overlay cellular network over a vehicular social network is considered, where commuters request multimedia content from the stationary road-side units (RSUs), the base station, or, if accessible, the single mobile cache unit (MCU). First, we propose an algorithm to optimally distribute popular content among the locally deployed RSU caches. To further maximize the cache hits experienced by vehicles, we then present an algorithm that finds the best traversal path for the MCU based on the commuters' social degree distribution. For performance evaluation, the asymptotic regret upper bounds of the two algorithms are also derived. Simulations reveal that the proposed algorithms outperform existing content placement methods in terms of overall network throughput.
Pages: 2321-2324
Number of Pages: 4
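The letter frames cache content placement as a multi-armed bandit problem: a cache learns, from observed hits, which items from the library are worth storing. The sketch below is not the authors' algorithm (whose regret bounds are derived in the letter); it is a minimal, illustrative UCB1 loop in which an RSU treats each content item as an arm, the reward is a binary cache hit, and the request probabilities are hypothetical.

```python
import math
import random

def ucb1_cache_placement(popularities, rounds=10000, seed=0):
    """Illustrative UCB1 sketch (assumed setup, not the letter's algorithm):
    each arm is one content item; reward is 1 if a passing vehicle requests
    the currently cached item, 0 otherwise."""
    rng = random.Random(seed)
    n = len(popularities)
    counts = [0] * n    # how often each item has been cached
    means = [0.0] * n   # empirical hit rate of each item
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1  # cache each item once to initialize its estimate
        else:
            # UCB1 index: empirical mean plus an exploration bonus that
            # shrinks as an item accumulates observations
            arm = max(range(n),
                      key=lambda i: means[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        # Simulated request: a hit occurs with the item's (hypothetical)
        # request probability
        hit = 1.0 if rng.random() < popularities[arm] else 0.0
        counts[arm] += 1
        means[arm] += (hit - means[arm]) / counts[arm]  # running average
    return max(range(n), key=lambda i: counts[i])  # most-cached item

# Hypothetical Zipf-like request probabilities for a 5-item library;
# UCB1 concentrates its placements on the most popular item.
best = ucb1_cache_placement([0.45, 0.25, 0.15, 0.10, 0.05])
```

The letter's setting is richer than this single-cache sketch (multiple RSUs, a mobile cache unit routed by social degree distribution), but the same explore/exploit trade-off drives both of its proposed algorithms.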