Adaptive Hybrid Synchronization Primitives: A Reinforcement Learning Approach

Cited by: 0
Authors
Ganjaliyev, Fadai [1]
Affiliation
[1] ADA Univ, Sch IT & Engn, Baku, Azerbaijan
Keywords
spinning; sleeping; blocking; spin-then-block; spin-then-park; reinforcement learning
DOI
10.14569/IJACSA.2020.0110508
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
The choice of synchronization primitive used to protect shared resources is critical to application performance and scalability, both of which have become extremely unpredictable with the rise of multicore machines. Neither of the two most common contention management strategies works well in all cases: spinning provides fast lock handoff and is attractive when the system is undersubscribed but wastes processor cycles when it is oversubscribed, whereas blocking saves processor resources and is preferred in oversubscribed scenarios but lengthens the lock handoff phase and thus adds to the critical path. Hybrids such as spin-then-block and spin-then-park address this by switching between spinning and blocking depending on the contention level on the lock or on the system load. Even so, threads follow a fixed strategy and cannot learn or adapt to changes in system behavior. To overcome these limitations, it is proposed to apply principles of machine learning and formulate hybrid waiting as a reinforcement learning problem, so that threads can intelligently learn when they should spin and when they should sleep. The challenges of the suggested technique and future work are also briefly discussed.
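To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of a test-and-set lock whose waiters choose between two actions, spin first or park immediately, using an epsilon-greedy two-armed bandit rewarded with the negative acquisition latency. All names and constants (AdaptiveLock, SPIN_BUDGET, EPSILON, ALPHA) are illustrative assumptions, and parking is emulated with a condition variable rather than an OS-level park/unpark facility.

```cpp
// Sketch of an adaptive lock: each waiter picks "spin first" or "park
// immediately" with an epsilon-greedy bandit; shorter acquisition latency
// yields a higher (less negative) reward, reinforcing the faster action.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <random>

class AdaptiveLock {
public:
    void lock() {
        const auto start = std::chrono::steady_clock::now();
        const int action = choose_action();             // 0 = spin first, 1 = park immediately
        if (action == 0) {
            for (int i = 0; i < SPIN_BUDGET; ++i) {     // bounded spin phase
                if (try_acquire()) { reward(action, start); return; }
            }
        }
        {
            // Park phase: sleep until a release lets us take the lock.
            std::unique_lock<std::mutex> g(park_mutex_);
            park_cv_.wait(g, [this] { return try_acquire(); });
        }
        reward(action, start);
    }

    void unlock() {
        {
            std::lock_guard<std::mutex> g(park_mutex_); // avoids lost wakeups
            held_.store(false, std::memory_order_release);
        }
        park_cv_.notify_one();
    }

private:
    bool try_acquire() {
        bool expected = false;
        return held_.compare_exchange_strong(expected, true,
                                             std::memory_order_acquire);
    }

    int choose_action() {
        static thread_local std::mt19937 rng{std::random_device{}()};
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::lock_guard<std::mutex> g(stats_mutex_);
        if (u(rng) < EPSILON)                            // explore occasionally
            return std::uniform_int_distribution<int>(0, 1)(rng);
        return q_[0] >= q_[1] ? 0 : 1;                   // otherwise exploit
    }

    void reward(int action, std::chrono::steady_clock::time_point start) {
        const double wait_us = std::chrono::duration<double, std::micro>(
            std::chrono::steady_clock::now() - start).count();
        std::lock_guard<std::mutex> g(stats_mutex_);
        q_[action] += ALPHA * (-wait_us - q_[action]);   // incremental value update
    }

    static constexpr int    SPIN_BUDGET = 1000;  // assumed spin iterations
    static constexpr double EPSILON     = 0.1;   // assumed exploration rate
    static constexpr double ALPHA       = 0.1;   // assumed learning rate

    std::atomic<bool>       held_{false};
    double                  q_[2] = {0.0, 0.0};  // value estimates: [spin, park]
    std::mutex              park_mutex_, stats_mutex_;
    std::condition_variable park_cv_;
};
```

Under this assumed reward, the spin action should accumulate the better value estimate in undersubscribed runs and the park action should win under oversubscription; per-lock or per-thread state, richer state features such as waiter count or system load, and a futex-based park path are directions a full design would need to consider.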
Pages: 51 - 57 (7 pages)