Maximizing Profits of Allocating Limited Resources under Stochastic User Demands

Cited: 0
Authors
Shi, Bing [1 ,2 ]
Li, Bingzhen [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
Keywords
Resource Allocation; Stochastic Demands; Markov Decision Process; Q-learning; Q-DP Algorithm
DOI
10.1109/ICPADS47876.2019.00020
CLC Number (Chinese Library Classification)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Nowadays, cloud brokers play an important role in allocating resources in the cloud computing market: they mediate between cloud users and service providers by buying a limited amount of capacity from the providers and subleasing it to users to make a profit. However, user demands are usually stochastic and the resource capacity bought from the cloud providers is limited. Therefore, to maximize its profit, the broker needs an effective resource allocation algorithm to decide whether or not to satisfy the demands of arriving users, i.e., to allocate the resources to the most valuable users. In this paper, we propose a resource allocation algorithm named Q-DP, based on reinforcement learning and dynamic programming, for the broker to maximize its profit. First, we treat all user demands arriving at each stage as a bundle and model the broker's process of allocating resources to the arriving users as a Markov Decision Process. We then use the Q-learning algorithm to determine how much of the resources to allocate to the bundle of users arriving at the current stage. Next, we use dynamic programming to decide which cloud users obtain the resources. Finally, we run experiments on a synthetic dataset and a real-world dataset to evaluate our resource allocation algorithm against other typical resource allocation algorithms, and show that our algorithm outperforms them, especially when the broker has extremely limited resources.
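The paper's algorithm is not reproduced in this record, but the following minimal Python sketch illustrates the two-level idea described in the abstract under assumptions of our own: a Q-learning loop chooses how much of the broker's remaining capacity to release to the bundle of demands arriving at each stage, and a knapsack-style dynamic program then selects which users in that bundle actually receive resources. The helper names (q_dp_episode, knapsack_select), the toy demand generator, the reward shape, and the state encoding are all illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of the Q-DP idea: Q-learning picks the amount of capacity
# released per stage; a 0/1-knapsack DP picks which users in the bundle get it.
# Demand model, reward shape, and state encoding are hypothetical.
import random
from collections import defaultdict

def knapsack_select(demands, values, budget):
    """0/1 knapsack DP: choose users whose total demand fits the budget
    and whose total value (payment to the broker) is maximal."""
    n = len(demands)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d, v = demands[i - 1], values[i - 1]
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]
            if d <= b:
                dp[i][b] = max(dp[i][b], dp[i - 1][b - d] + v)
    chosen, b = [], budget                     # backtrack to recover users
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            chosen.append(i - 1)
            b -= demands[i - 1]
    return dp[n][budget], chosen

def q_dp_episode(q_table, capacity, stages, alpha=0.1, gamma=0.95, eps=0.1):
    """One episode: at each stage a random bundle of user demands arrives;
    Q-learning decides how much remaining capacity to release to the bundle."""
    remaining, total_profit = capacity, 0
    for stage in range(stages):
        # Hypothetical stochastic bundle of demands and per-user values.
        demands = [random.randint(1, 5) for _ in range(random.randint(1, 4))]
        values = [d * random.randint(2, 4) for d in demands]
        state = (stage, remaining)
        actions = list(range(remaining + 1))   # amount released this stage
        if random.random() < eps:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q_table[(state, a)])
        reward, chosen = knapsack_select(demands, values, action)
        remaining -= sum(demands[i] for i in chosen)
        next_state = (stage + 1, remaining)
        best_next = max((q_table[(next_state, a)]
                         for a in range(remaining + 1)), default=0.0)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)])
        total_profit += reward
    return total_profit

if __name__ == "__main__":
    q = defaultdict(float)
    for _ in range(2000):
        q_dp_episode(q, capacity=20, stages=5)
    print("sample greedy-episode profit:", q_dp_episode(q, 20, 5, eps=0.0))
```

In the paper's full setting, the reward would presumably account for the cost the broker pays the providers and the value function would be learned over the real arrival process; the toy demand generator above stands in for that stochastic environment only for illustration.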
Pages: 85-92
Page count: 8
Related Papers
50 records in total
  • [21] Goal programming model for optimal water allocation of limited resources under increasing demands
    Ammar Ahmed Musa
    Environment, Development and Sustainability, 2021, 23 : 5956 - 5984
  • [22] Optimal allocation of limited and random network resources to discrete stochastic demands for standardized cargo transportation networks
    Wang, Xinchang
    TRANSPORTATION RESEARCH PART B-METHODOLOGICAL, 2016, 91 : 310 - 331
  • [23] Maximizing throughput for traffic grooming with limited grooming resources
    Wang, Yong
    Gu, Qian-Ping
    GLOBECOM 2007: 2007 IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE, VOLS 1-11, 2007, : 2337 - 2341
  • [24] A new perspective on classification: Optimally allocating limited resources to uncertain tasks
    Vanderschueren, Toon
    Baesens, Bart
    Verdonck, Tim
    Verbeke, Wouter
    DECISION SUPPORT SYSTEMS, 2024, 179
  • [25] Allocating surveillance resources to reduce ecological invasions: maximizing detections and information about the threat
    Robinson, Andrew
    Burgman, Mark A.
    Cannon, Rob
    ECOLOGICAL APPLICATIONS, 2011, 21 (04) : 1410 - 1417
  • [26] ALLOCATING CONSERVATION RESOURCES UNDER THE ENDANGERED SPECIES ACT
    Langpap, Christian
    Kerkvliet, Joe
    AMERICAN JOURNAL OF AGRICULTURAL ECONOMICS, 2010, 92 (01) : 110 - 124
  • [27] Allocating Resources for Workflows Running under Authorization Control
    He, Ligang
    Chaudhary, Nadeem
    Jarvis, Stephen A.
    Li, Kenli
    2012 ACM/IEEE 13TH INTERNATIONAL CONFERENCE ON GRID COMPUTING (GRID), 2012, : 58 - 65
  • [28] CAPACITY EXPANSION UNDER STOCHASTIC DEMANDS
    BEAN, JC
    HIGLE, JL
    SMITH, RL
    OPERATIONS RESEARCH, 1992, 40 : S210 - S216
  • [29] Maximizing customers' lifetime value using limited marketing resources
    Marmol, Mage
    Goyal, Anita
    Copado-Mendez, Pedro Jesus
    Panadero, Javier
    Juan, Angel A.
    MARKETING INTELLIGENCE & PLANNING, 2021, 39 (08) : 1058 - 1072
  • [30] Rendezvous search on the line with limited resources: Maximizing the probability of meeting
    Alpern, Steve
    Beck, Anatole
    Operations Research, 47 (06): 849 - 861