Adaptive Hybrid Synchronization Primitives: A Reinforcement Learning Approach

Cited by: 0
Authors
Ganjaliyev, Fadai [1 ]
Affiliations
[1] ADA Univ, Sch IT & Engn, Baku, Azerbaijan
Keywords
Spinning; sleeping; blocking; spin-then-block; spin-then-park; reinforcement learning
DOI
10.14569/IJACSA.2020.0110508
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Discipline code: 081202
Abstract
The choice of synchronization primitive used to protect shared resources is a critical factor in application performance and scalability, and it has become highly unpredictable with the rise of multicore machines. Neither of the two most commonly used contention management strategies works well in all cases: spinning provides quick lock handoff and is attractive when the system is undersubscribed but wastes processor cycles when it is oversubscribed, whereas blocking saves processor resources and is preferred in oversubscribed cases but lengthens the lock handoff phase and thus adds to the critical path. Hybrids such as spin-then-block and spin-then-park tackle this problem by switching between spinning and blocking depending on the contention level on the lock or the system load. Even so, threads follow a fixed strategy and cannot learn and adapt to changes in system behavior. To overcome these limitations, this paper proposes applying principles of machine learning to formulate hybrid methods as a reinforcement learning problem, so that threads can intelligently learn when to spin and when to sleep. The challenges of the suggested technique and future work are also briefly discussed.
Pages: 51-57 (7 pages)
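The abstract's two ideas can be sketched concretely: a spin-then-park hybrid lock, and a learning policy that picks between spinning and parking from observed waiting cost. The sketch below is a minimal Python illustration, not the paper's implementation; the class names, the `spin_limit` knob, and the epsilon-greedy bandit (a simple stand-in for the paper's reinforcement learning formulation) are all assumptions made for illustration.

```python
import random
import threading

class SpinThenParkLock:
    """Spin-then-park hybrid sketch: try to grab the lock up to
    `spin_limit` times, then block on a condition variable."""

    def __init__(self, spin_limit=1000):
        self._cond = threading.Condition()
        self._held = False
        self.spin_limit = spin_limit

    def _try_acquire(self):
        with self._cond:
            if not self._held:
                self._held = True
                return True
        return False

    def acquire(self):
        # Phase 1: spin -- fast handoff when the system is undersubscribed.
        for _ in range(self.spin_limit):
            if self._try_acquire():
                return "spun"
        # Phase 2: park -- yield the CPU when the lock stays contended.
        with self._cond:
            while self._held:
                self._cond.wait()
            self._held = True
        return "parked"

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify()


class LearnedWaitPolicy:
    """Epsilon-greedy bandit over {spin, park}: each action's value is
    updated from a reward (e.g. negative waiting cost), so the thread
    gravitates toward whichever strategy has been cheaper recently."""

    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = {"spin": 0.0, "park": 0.0}
        self.epsilon, self.alpha = epsilon, alpha

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))   # explore
        return max(self.q, key=self.q.get)       # exploit

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])
```

An uncontended `acquire()` succeeds on the spin path; under sustained contention, later waiters fall through to parking. The policy class replaces the fixed `spin_limit` cutoff with a decision learned from feedback, which is the shift the abstract argues for.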