Deep Reinforcement Learning of Graph Convolutional Neural Network for Resilient Production Control of Mass Individualized Prototyping Toward Industry 5.0

Times Cited: 1
Authors
Leng, Jiewu [1 ]
Ruan, Guolei [1 ]
Xu, Caiyu [1 ]
Zhou, Xueliang [2 ]
Xu, Kailin [1 ]
Qiao, Yan [3 ]
Liu, Zhihong [4 ]
Liu, Qiang [1 ]
Affiliations
[1] Guangdong Univ Technol, State Key Lab Precis Elect Mfg Technol & Equipment, Guangzhou 510006, Peoples R China
[2] Hubei Univ Automot Technol, Dept Elect & Informat Engn, Shiyan 442002, Peoples R China
[3] Macau Univ Sci & Technol, Inst Syst Engn, Macau, Peoples R China
[4] China South Ind Grp Co Ltd, Inst Automat, Mianyang 510704, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Production; Stress; Resilience; Production control; Manufacturing; Intelligent agents; Deep reinforcement learning; graph convolutional neural network; Industry 5.0; mass individualized prototyping (MIP); resilient production control (RPC);
DOI
10.1109/TSMC.2024.3446671
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Mass individualized prototyping (MIP) is an advanced, high-value-added manufacturing service. In the MIP context, service providers usually receive massive numbers of individualized prototyping orders and must maintain a stable state in the presence of continuous significant stresses or disruptions to maximize profit. This article proposes a graph convolutional neural network-based deep reinforcement learning (GCNN-DRL) method to achieve resilient production control of MIP (RPC-MIP). The proposed method combines the strong feature-extraction ability of graph convolutional neural networks with the autonomous decision-making ability of deep reinforcement learning. First, a three-dimensional disjunctive graph is defined to model the RPC-MIP, and two dimensionality-reduction rules are proposed to reduce the dimensionality of the disjunctive graph. Extracting the features of the reduced-dimensional disjunctive graph with a graph isomorphism network improves the convergence of the model. Second, a two-stage control decision strategy is proposed in the DRL process to avoid poor solution quality in the large-scale search space of the RPC-MIP. As a result, the proposed GCNN-DRL method achieves high generalization capability and efficiency, as verified by experiments. It sustains system performance under continuous significant stresses of workpiece replenishment and rapidly rearranges dispatching decisions to achieve fast recovery after disruptions occur across different production scenarios and system scales, thereby improving the system's resilience.
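The sketch below illustrates, in broad strokes, the kind of architecture the abstract describes: a graph isomorphism network (GIN) encoder over disjunctive-graph node features feeding a per-node scoring head that a DRL agent could use to pick the next dispatching action. It is a minimal, hypothetical example; the class names, feature layout, and greedy action selection are assumptions for illustration and do not reproduce the paper's dimensionality-reduction rules or two-stage decision strategy.

```python
# Minimal sketch (not the authors' implementation): GIN feature extraction over a
# disjunctive-graph adjacency matrix plus a policy head that scores candidate
# dispatching actions. All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn


class GINLayer(nn.Module):
    """One GIN update: h' = MLP((1 + eps) * h + A @ h), with dense adjacency A."""

    def __init__(self, dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.mlp((1.0 + self.eps) * h + adj @ h)


class DispatchPolicy(nn.Module):
    """GIN encoder followed by a per-node score used to choose the next operation."""

    def __init__(self, in_dim: int, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.gnn = nn.ModuleList([GINLayer(hidden) for _ in range(layers)])
        self.score = nn.Linear(hidden, 1)  # Q-value / logit per candidate node

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.embed(x))
        for layer in self.gnn:
            h = layer(h, adj)
        return self.score(h).squeeze(-1)  # shape: [num_nodes]


if __name__ == "__main__":
    num_ops, feat_dim = 12, 6                           # toy problem size
    x = torch.randn(num_ops, feat_dim)                  # node features (e.g., processing time, status)
    adj = (torch.rand(num_ops, num_ops) < 0.2).float()  # toy disjunctive-graph adjacency
    policy = DispatchPolicy(feat_dim)
    scores = policy(x, adj)
    action = int(scores.argmax())                       # greedy pick; DRL training (e.g., DQN/PPO) would drive this
    print("dispatch operation index:", action)
```

In a DRL loop, the scores would parameterize the agent's action selection at each decision point, with the reward reflecting resilience-oriented objectives such as recovery speed after a disruption.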
Pages: 7092-7105
Page count: 14
Related Papers
50 records in total
  • [41] The application of heterogeneous graph neural network and deep reinforcement learning in hybrid flow shop scheduling problem
    Zhao, Yejian
    Luo, Xiaochuan
    Zhang, Yulin
    COMPUTERS & INDUSTRIAL ENGINEERING, 2024, 187
  • [42] Federated Digital Twins: A Scheduling Approach Based on Temporal Graph Neural Network and Deep Reinforcement Learning
    Kim, Young-Jin
    Kim, Hanjin
    Ha, Beomsu
    Kim, Won-Tae
    IEEE ACCESS, 2025, 13 : 20763 - 20777
  • [43] Dynamic Job-Shop Scheduling Problems Using Graph Neural Network and Deep Reinforcement Learning
    Liu, Chien-Liang
    Huang, Tzu-Hsuan
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2023, 53 (11) : 6836 - 6848
  • [44] A deep reinforcement learning method based on a multiexpert graph neural network for flexible job shop scheduling
    Huang, Dailin
    Zhao, Hong
    Tian, Weiquan
    Chen, Kangping
    COMPUTERS & INDUSTRIAL ENGINEERING, 2025, 200
  • [45] A mass correlation based deep learning approach using deep Convolutional neural network to classify the brain tumor
    Satyanarayana, Gandi
    Naidu, P. Appala
    Desanamukula, Venkata Subbaiah
    Kumar, Kadupukotla Satish
    Rao, B. Chinna
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 81
  • [46] Reinforcement Learning Based Multi-Agent Resilient Control: From Deep Neural Networks to an Adaptive Law
    Hou, Jian
    Wang, Fangyuan
    Wang, Lili
    Chen, Zhiyong
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7737 - 7745
  • [47] Intelligent path planning algorithm system for printed display manufacturing using graph convolutional neural network and reinforcement learning
    Xiong, Jiacong
    Chen, Jiankui
    Chen, Wei
    Yue, Xiao
    Zhao, Ziwei
    Yin, Zhouping
    JOURNAL OF MANUFACTURING SYSTEMS, 2025, 79 : 73 - 85
  • [48] Graph neural network and multi-agent reinforcement learning for machine-process-system integrated control to optimize production yield
    Huang, Jing
    Su, Jianyu
    Chang, Qing
    JOURNAL OF MANUFACTURING SYSTEMS, 2022, 64 : 81 - 93
  • [49] Graph neural network and reinforcement learning for multi-agent cooperative control of connected autonomous vehicles
    Chen, Sikai
    Dong, Jiqian
    Ha, Paul
    Li, Yujie
    Labi, Samuel
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2021, 36 (07) : 838 - 857
  • [50] Intelligent Caching with Graph Neural Network-Based Deep Reinforcement Learning on SDN-Based ICN
    Hou, Jiacheng
    Tao, Tianhao
    Lu, Haoye
    Nayak, Amiya
    FUTURE INTERNET, 2023, 15 (08)