Adaptive Hybrid Synchronization Primitives: A Reinforcement Learning Approach

Cited by: 0
Authors
Ganjaliyev, Fadai [1]
Affiliation
[1] ADA Univ, Sch IT & Engn, Baku, Azerbaijan
Keywords
Spinning; sleeping; blocking; spin-then-block; spin-then-park; reinforcement learning
DOI
10.14569/IJACSA.2020.0110508
Chinese Library Classification
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
The choice of synchronization primitive used to protect shared resources is a critical determinant of application performance and scalability, both of which have become extremely unpredictable with the rise of multicore machines. Neither of the two most commonly used contention management strategies works well in all cases: spinning provides quick lock handoff and is attractive in an undersubscribed situation but wastes processor cycles in oversubscribed scenarios, whereas blocking saves processor resources and is preferred in oversubscribed cases but adds to the critical path by lengthening the lock handoff phase. Hybrids, such as spin-then-block and spin-then-park, tackle this problem by switching between spinning and blocking depending on the contention level on the lock or the system load. Even so, threads follow a fixed strategy and cannot learn and adapt to changes in system behavior. To this end, it is proposed to apply machine learning by formulating hybrid methods as a reinforcement learning problem that overcomes these limitations. In this way, threads can intelligently learn when they should spin or sleep. The challenges of the suggested technique and future work are also briefly discussed.
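The spin-then-block strategy the abstract describes can be illustrated with a minimal sketch: attempt a bounded number of non-blocking acquires (the spin phase), and only then fall back to a blocking acquire (the block phase). This is a generic illustration, not the paper's implementation; the `SPIN_LIMIT` threshold is an assumed tuning parameter, which the paper proposes to adapt via reinforcement learning rather than fix statically.

```python
import threading

class SpinThenBlockLock:
    """Minimal spin-then-block sketch (illustrative, not the paper's code).

    SPIN_LIMIT is an assumed fixed threshold; the paper's point is that
    such thresholds cannot adapt to load, motivating a learned policy.
    """
    SPIN_LIMIT = 100  # assumed tuning parameter

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        # Spin phase: bounded busy-wait with non-blocking attempts.
        for _ in range(self.SPIN_LIMIT):
            if self._lock.acquire(blocking=False):
                return
        # Block phase: yield the CPU and sleep until the lock is free.
        self._lock.acquire()

    def release(self):
        self._lock.release()

# Usage: four threads increment a shared counter under the hybrid lock.
lock = SpinThenBlockLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Under low contention the spin phase usually succeeds on the first attempt, giving fast handoff; under heavy oversubscription most acquires fall through to the blocking phase, freeing processor cycles, which is exactly the trade-off a learned policy would tune per situation.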
Pages: 51-57 (7 pages)
Related papers (50 total)
  • [31] Deep Synchronization Control of Grid-Forming Converters: A Reinforcement Learning Approach
    Wu, Zhuorui
    Zhang, Meng
    Fan, Bo
    Shi, Yang
    Guan, Xiaohong
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2025, 12 (01) : 273 - 275
  • [33] Imitation and Reinforcement Learning Practical Algorithms for Motor Primitives in Robotics
    Kober, Jens
    Peters, Jan
    IEEE ROBOTICS & AUTOMATION MAGAZINE, 2010, 17 (02) : 55 - 62
  • [34] Reinforcement learning to adjust parametrized motor primitives to new situations
    Kober, Jens
    Wilhelm, Andreas
    Oztop, Erhan
    Peters, Jan
    AUTONOMOUS ROBOTS, 2012, 33 (04) : 361 - 379
  • [35] Impedance Adaptation by Reinforcement Learning with Contact Dynamic Movement Primitives
    Chang, Chunyang
    Haninger, Kevin
    Shi, Yunlei
    Yuan, Chengjie
    Chen, Zhaopeng
    Zhang, Jianwei
    2022 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2022, : 1185 - 1191
  • [36] Annotating Motion Primitives for Simplifying Action Search in Reinforcement Learning
    Sledge, Isaac J.
    Bryner, Darshan W.
    Principe, Jose C.
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2022, 6 (05): : 1137 - 1156
  • [37] Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks
    Nasiriany, Soroush
    Liu, Huihan
    Zhu, Yuke
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7477 - 7484
  • [38] Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives
    Dalal, Murtaza
    Pathak, Deepak
    Salakhutdinov, Ruslan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [40] A HYBRID MULTIAGENT REINFORCEMENT LEARNING APPROACH USING STRATEGIES AND FUSION
    Partalas, Ioannis
    Feneris, Ioannis
    Vlahavas, Ioannis
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2008, 17 (05) : 945 - 962