Deep Reinforcement Learning of Graph Convolutional Neural Network for Resilient Production Control of Mass Individualized Prototyping Toward Industry 5.0

Cited by: 1
Authors
Leng, Jiewu [1 ]
Ruan, Guolei [1 ]
Xu, Caiyu [1 ]
Zhou, Xueliang [2 ]
Xu, Kailin [1 ]
Qiao, Yan [3 ]
Liu, Zhihong [4 ]
Liu, Qiang [1 ]
Affiliations
[1] Guangdong Univ Technol, State Key Lab Precis Elect Mfg Technol & Equipment, Guangzhou 510006, Peoples R China
[2] Hubei Univ Automot Technol, Dept Elect & Informat Engn, Shiyan 442002, Peoples R China
[3] Macau Univ Sci & Technol, Inst Syst Engn, Macau, Peoples R China
[4] China South Ind Grp Co Ltd, Inst Automat, Mianyang 510704, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Production; Stress; Resilience; Production control; Manufacturing; Intelligent agents; Deep reinforcement learning; graph convolutional neural network; Industry 5.0; mass individualized prototyping (MIP); resilient production control (RPC);
DOI
10.1109/TSMC.2024.3446671
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Mass individualized prototyping (MIP) is an advanced, high-value-added manufacturing service. In the MIP context, service providers usually receive massive numbers of individualized prototyping orders and must maintain a stable state in the presence of continuous significant stresses or disruptions to maximize profit. This article proposes a graph convolutional neural network-based deep reinforcement learning (GCNN-DRL) method to achieve resilient production control of MIP (RPC-MIP). The proposed method combines the excellent feature-extraction ability of graph convolutional neural networks with the autonomous decision-making ability of deep reinforcement learning. First, a three-dimensional disjunctive graph is defined to model the RPC-MIP, and two dimensionality-reduction rules are proposed to reduce the dimensionality of the disjunctive graph. By extracting the features of the reduced-dimensional disjunctive graph through a graph isomorphism network, the convergence of the model is improved. Second, a two-stage control decision strategy is proposed in the DRL process to avoid poor solution quality in the large-scale search space of the RPC-MIP. As a result, the proposed GCNN-DRL method achieves high generalization capability and efficiency, which is verified by experiments. It maintains system performance under continuous significant stresses of workpiece replenishment and also rapidly rearranges dispatching decisions to achieve fast recovery after disruptions occur across different production scenarios and system scales, thereby improving the system's resilience.
Pages: 7092-7105
Number of pages: 14
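
The record above contains no code. As a rough illustration only, the following is a minimal sketch, in PyTorch and not the authors' implementation, of the general pattern the abstract describes: a graph isomorphism network (GIN) encoder over a disjunctive-graph state feeding an actor-critic policy that scores dispatchable operations. The node features, dense adjacency, network sizes, and all names below are hypothetical assumptions.

```python
# Minimal sketch (assumed, not from the paper): GIN encoder over a disjunctive-graph
# state plus a policy/value head for choosing the next operation to dispatch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GINLayer(nn.Module):
    """One Graph Isomorphism Network layer using a dense adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))          # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # h: (N, dim) node embeddings; adj: (N, N) adjacency of the disjunctive graph
        return self.mlp((1.0 + self.eps) * h + adj @ h)  # aggregate neighbors, then transform

class DispatchPolicy(nn.Module):
    """GIN encoder + actor/critic heads scoring candidate dispatching actions."""
    def __init__(self, in_dim, hid_dim=64, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)
        self.gnn = nn.ModuleList([GINLayer(hid_dim) for _ in range(n_layers)])
        self.actor = nn.Linear(2 * hid_dim, 1)           # scores each candidate operation
        self.critic = nn.Linear(hid_dim, 1)              # state value for actor-critic updates

    def forward(self, x, adj, mask):
        # x: (N, in_dim) hypothetical operation features (processing time, machine load, ...)
        # mask: (N,) boolean, True for operations that are currently dispatchable
        h = self.embed(x)
        for layer in self.gnn:
            h = F.relu(layer(h, adj))
        g = h.mean(dim=0, keepdim=True)                  # graph-level context embedding
        scores = self.actor(torch.cat([h, g.expand_as(h)], dim=-1)).squeeze(-1)
        scores = scores.masked_fill(~mask, float("-inf"))
        return F.softmax(scores, dim=-1), self.critic(g).squeeze()

# Usage: sample a dispatching action for a toy state with 5 operation nodes.
policy = DispatchPolicy(in_dim=4)
x = torch.rand(5, 4)
adj = (torch.rand(5, 5) > 0.5).float()
mask = torch.tensor([True, True, False, True, False])
probs, value = policy(x, adj, mask)
action = torch.multinomial(probs, 1).item()
```

The masking step mirrors the general idea of restricting the action space to feasible dispatching decisions; the paper's two-stage control decision strategy and dimensionality-reduction rules are not reproduced here.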