Deep Reinforcement Learning of Graph Convolutional Neural Network for Resilient Production Control of Mass Individualized Prototyping Toward Industry 5.0

Cited by: 1
Authors
Leng, Jiewu [1 ]
Ruan, Guolei [1 ]
Xu, Caiyu [1 ]
Zhou, Xueliang [2 ]
Xu, Kailin [1 ]
Qiao, Yan [3 ]
Liu, Zhihong [4 ]
Liu, Qiang [1 ]
Affiliations
[1] Guangdong Univ Technol, State Key Lab Precis Elect Mfg Technol & Equipmen, Guangzhou 510006, Peoples R China
[2] Hubei Univ Automot Technol, Dept Elect & Informat Engn, Shiyan 442002, Peoples R China
[3] Macau Univ Sci & Technol, Inst Syst Engn, Macau, Peoples R China
[4] China South Ind Grp Co Ltd, Inst Automat, Mianyang 510704, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Production; Stress; Resilience; Production control; Manufacturing; Intelligent agents; Deep reinforcement learning; graph convolutional neural network; Industry 5.0; mass individualized prototyping (MIP); resilient production control (RPC);
DOI
10.1109/TSMC.2024.3446671
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Mass individualized prototyping (MIP) is an advanced, high-value-added manufacturing service. In the MIP context, service providers typically receive massive numbers of individualized prototyping orders and must maintain a stable state in the presence of continuous significant stresses or disruptions to maximize profit. This article proposes a graph convolutional neural network-based deep reinforcement learning (GCNN-DRL) method to achieve resilient production control of MIP (RPC-MIP). The proposed method combines the strong feature-extraction ability of graph convolutional neural networks with the autonomous decision-making ability of deep reinforcement learning. First, a three-dimensional disjunctive graph is defined to model the RPC-MIP problem, and two dimensionality-reduction rules are proposed to reduce the dimensionality of the disjunctive graph. Extracting features from the reduced disjunctive graph through a graph isomorphism network improves the convergence of the model. Second, a two-stage control decision strategy is proposed in the DRL process to avoid poor solution quality in the large-scale search space of the RPC-MIP. As a result, the proposed GCNN-DRL method attains high generalization capability and efficiency, as verified by experiments. It sustains system performance under continuous significant stresses of workpiece replenishment and rapidly rearranges dispatching decisions to achieve fast recovery after disruptions occur in different production scenarios and system scales, thereby improving the system's resilience.
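The two core ingredients of the abstract can be illustrated with a minimal sketch: a single GIN-style aggregation step over a small disjunctive-graph fragment, followed by a two-stage (machine-then-operation) greedy decision. All function names, features, and scores below are illustrative assumptions, not the paper's implementation; the actual method uses learned MLP weights, a richer state encoding, and a trained DRL policy.

```python
def gin_update(features, adj, eps=0.1):
    """One Graph Isomorphism Network layer without learned weights:
    h_v' = (1 + eps) * h_v + sum of neighbor features."""
    updated = []
    for v, h_v in enumerate(features):
        agg = [(1 + eps) * x for x in h_v]
        for u in adj[v]:
            agg = [a + x for a, x in zip(agg, features[u])]
        updated.append(agg)
    return updated

def two_stage_decision(machine_scores, op_scores_per_machine):
    """Stage 1: pick the highest-scoring machine.
    Stage 2: pick the best operation on that machine only,
    shrinking the search space versus a flat joint choice."""
    machine = max(range(len(machine_scores)), key=machine_scores.__getitem__)
    ops = op_scores_per_machine[machine]
    op = max(range(len(ops)), key=ops.__getitem__)
    return machine, op

# Three operation nodes with 2-d features; edges stand in for the
# conjunctive (precedence) and disjunctive (shared-machine) arcs.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
embedded = gin_update(features, adj)

machine, op = two_stage_decision([0.2, 0.9], [[0.5, 0.1], [0.3, 0.8]])
print(machine, op)  # → 1 1
```

In a trained agent, the node embeddings produced by the GIN layers would feed the policy network that emits the machine and operation scores; here the scores are hard-coded only to show the two-stage selection shrinking the decision space.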
Pages: 7092-7105 (14 pages)
Related Papers (50 records)
  • [1] Production Scheduling based on Deep Reinforcement Learning using Graph Convolutional Neural Network
    Seito, Takanari
    Munakata, Satoshi
    ICAART: PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2020, : 766 - 772
  • [2] Optimizing bidding strategy in electricity market based on graph convolutional neural network and deep reinforcement learning
    Weng, Haoen
    Hu, Yongli
    Liang, Min
    Xi, Jiayang
    Yin, Baocai
    APPLIED ENERGY, 2025, 380
  • [3] Convolutional Neural Network Based Unmanned Ground Vehicle Control via Deep Reinforcement Learning
    Liu, Yongxin
    He, Qiang
    Wang, Junhui
    Wang, Zhiliang
    Chen, Tianheng
    Jin, Shichen
    Zhang, Chi
    Wang, Zhiqiang
    2022 4TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS, ICCR, 2022, : 470 - 475
  • [4] Graph Convolutional Network-Based Topology Embedded Deep Reinforcement Learning for Voltage Stability Control
    Hossain, Ramij R.
    Huang, Qiuhua
    Huang, Renke
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2021, 36 (05) : 4848 - 4851
  • [5] Process Industry Scheduling Based on Graph Neural Network and Reinforcement Learning
    Wu, Zhenyu
    Wang, Yin
    39TH YOUTH ACADEMIC ANNUAL CONFERENCE OF CHINESE ASSOCIATION OF AUTOMATION, YAC 2024, 2024, : 1598 - 1603
  • [6] Traffic Signal Control Based on Reinforcement Learning with Graph Convolutional Neural Nets
    Nishi, Tomoki
    Otaki, Keisuke
    Hayakawa, Keiichiro
    Yoshimura, Takayoshi
    2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2018, : 877 - 883
  • [7] Automatic Virtual Network Embedding: A Deep Reinforcement Learning Approach With Graph Convolutional Networks
    Yan, Zhongxia
    Ge, Jingguo
    Wu, Yulei
    Li, Liangxiong
    Li, Tong
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (06) : 1040 - 1057
  • [8] Transient Stability Preventive Control Based on Graph Convolution Neural Network and Transfer Deep Reinforcement Learning
    Wang, Tianjing
    Tang, Yong
    CSEE JOURNAL OF POWER AND ENERGY SYSTEMS, 2025, 11 (01): : 136 - 149
  • [9] Graph Neural Network Based Deep Reinforcement Learning for Volt-Var Control in Distribution Grids
    Ma, Aoxiang
    Cao, Jun
    Rodriguez Cortes, Pedro
    IEEE 15TH INTERNATIONAL SYMPOSIUM ON POWER ELECTRONICS FOR DISTRIBUTED GENERATION SYSTEMS, PEDG 2024, 2024,
  • [10] Dual deep reinforcement learning agents-based integrated order acceptance and scheduling of mass individualized prototyping
    Leng, Jiewu
    Guo, Jiwei
    Zhang, Hu
    Xu, Kailin
    Qiao, Yan
    Zheng, Pai
    Shen, Weiming
    JOURNAL OF CLEANER PRODUCTION, 2023, 427