An improved deep reinforcement learning-based scheduling approach for dynamic task scheduling in cloud manufacturing

Cited by: 7
Authors
Wang, Xiaohan [1 ]
Zhang, Lin [1 ,4 ,5 ]
Liu, Yongkui [2 ]
Laili, Yuanjun [1 ,3 ]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing, Peoples R China
[2] Xidian Univ, Sch Mechanoelect Engn, Xian, Peoples R China
[3] Zhongguancun Lab, Beijing, Peoples R China
[4] State Key Lab Intelligent Mfg Syst Technol, Beijing, Peoples R China
[5] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cloud manufacturing; deep reinforcement learning; dynamic scheduling; intelligent decision-making; combinatorial optimization;
DOI
10.1080/00207543.2023.2253326
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline code
08;
Abstract
The dynamic task scheduling problem in cloud manufacturing (CMfg) remains challenging because manufacturing requirements and services change continuously. To make instant decisions for incoming task requirements, deep reinforcement learning-based (DRL-based) methods have been widely applied to learn the scheduling policies of service providers. However, current DRL-based scheduling methods struggle to fine-tune a pre-trained policy effectively; training from scratch instead takes more time and can easily overfit the environment. In addition, uneven action distributions and inefficient output masks in most DRL-based methods greatly reduce training efficiency and thus degrade solution quality. To this end, this paper proposes an improved DRL-based approach for dynamic task scheduling in CMfg. First, the paper uncovers the causes of the inadequate fine-tuning ability and low training efficiency observed in existing DRL-based scheduling methods. A novel approach is then proposed to address these issues by updating the scheduling policy while accounting for the distribution distance between the pre-training dataset and the in-training policy. Uncertainty weights are introduced into the loss function, and the output mask is extended to the updating procedure. Numerical experiments on thirty real scheduling instances show that the solution quality and generalization of the proposed approach surpass those of other DRL-based methods by up to 32.8% and 28.6%, respectively. Moreover, the method can effectively fine-tune a pre-trained scheduling policy, yielding an average reward increase of up to 23.8%.
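The abstract names three ingredients of the update rule: a penalty on the distribution distance between the pre-trained and in-training policies, uncertainty weights on the loss terms, and an output mask applied inside the update as well as at action-selection time. The sketch below illustrates how these ideas could fit together for a discrete scheduling policy; it is a minimal illustration, not the authors' implementation. All function names, tensor shapes, and the Kendall-style learnable uncertainty weighting are assumptions introduced here.

```python
# Illustrative sketch only (assumed PyTorch policy networks, not the paper's code):
# (i) KL distance to a frozen pre-trained policy as the "distribution distance",
# (ii) learnable uncertainty weights on the loss terms,
# (iii) the feasibility mask applied inside the update, not only when sampling actions.
import torch
import torch.nn.functional as F


def masked_logits(logits, mask):
    # Infeasible services receive a large negative logit and hence ~zero probability.
    return logits.masked_fill(~mask, -1e9)


def update_loss(policy, pretrained_policy, states, actions, advantages, mask,
                log_sigma_pg, log_sigma_kl):
    """One hypothetical update step for a discrete scheduling policy.

    policy, pretrained_policy : callables mapping states -> action logits [B, A]
    actions                   : LongTensor [B] of chosen (feasible) service indices
    advantages                : FloatTensor [B] of advantage estimates
    mask                      : BoolTensor [B, A], True where a service is feasible
    log_sigma_pg, log_sigma_kl: learnable scalars weighting the two loss terms
    """
    logits = masked_logits(policy(states), mask)
    with torch.no_grad():
        ref_logits = masked_logits(pretrained_policy(states), mask)

    log_probs = F.log_softmax(logits, dim=-1)
    ref_log_probs = F.log_softmax(ref_logits, dim=-1)

    # Policy-gradient term computed on the masked action distribution.
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(advantages * chosen).mean()

    # Distribution distance (KL) between pre-trained and in-training policies,
    # discouraging the fine-tuned policy from drifting away from the
    # pre-training distribution.
    kl = (ref_log_probs.exp() * (ref_log_probs - log_probs)).sum(dim=-1).mean()

    # Uncertainty weighting (Kendall-style): each term is scaled by a learnable
    # precision and regularised by its log-variance.
    return (torch.exp(-log_sigma_pg) * pg_loss + log_sigma_pg
            + torch.exp(-log_sigma_kl) * kl + log_sigma_kl)
```

In a training loop one would call such a loss on each minibatch, backpropagate through both the policy and the two log-sigma scalars, and reuse the same feasibility mask at action-selection time so that sampling and updating see identical feasible sets; this is one plausible reading of the abstract's description, under the assumptions stated above.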
Pages: 4014-4030
Number of pages: 17