Real-Time Control of A2O Process in Wastewater Treatment Through Fast Deep Reinforcement Learning Based on Data-Driven Simulation Model

Citations: 0
Authors
Hu, Fukang [1 ]
Zhang, Xiaodong [2 ]
Lu, Baohong [2 ]
Lin, Yue [2 ]
Affiliations
[1] Univ New South Wales, Coll Civil Engn, Sydney, NSW 2052, Australia
[2] Hohai Univ, Coll Hydrol & Water Resources, Nanjing 210098, Peoples R China
Keywords
anaerobic-anoxic-oxic; real-time control; deep reinforcement learning; deep learning; energy consumption; treatment plants
DOI
10.3390/w16243710
CLC classification
X [Environmental Science, Safety Science]
Discipline codes
08; 0830
Abstract
Real-time control (RTC) can optimize the operation of the anaerobic-anoxic-oxic (A2O) process in wastewater treatment to save energy. In recent years, many studies have used deep reinforcement learning (DRL) to build AI-based RTC systems for the A2O process. However, existing DRL methods rely on mechanistic models of the A2O process for training, and such models require specific data that are often unavailable in wastewater treatment plants (WWTPs) with inadequate data-collection facilities. DRL training is also time-consuming, because it requires repeated simulations of the mechanistic model. To address these issues, this study designs a novel data-driven RTC method. The method first builds a simulation model of the A2O process using an LSTM with an attention module (LSTM-ATT), which can be trained on whatever operational data the A2O process provides. The LSTM-ATT model can be viewed as a greatly simplified relative of large language model (LLM) architectures: it analyzes time-sequence data more effectively than conventional deep learning models, while its small architecture avoids overfitting the A2O dynamics. On this basis, a new DRL training framework is constructed that exploits the fast computation of LSTM-ATT to accelerate DRL training. The proposed method is applied to a WWTP in Western China: an LSTM-ATT simulation model is built and used to train a deep Q-network (DQN) RTC model that reduces aeration while keeping the effluent qualified. The LSTM-ATT simulation achieves mean squared errors between 0.0039 and 0.0243 and R-squared values above 0.996. The control strategy provided by the DQN reduces the average DO setpoint from 3.956 mg/L to 3.884 mg/L with acceptable effluent quality. This study provides a purely data-driven, DRL-based RTC method for the A2O process in WWTPs that is effective for energy saving and consumption reduction.
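The paper does not publish its model code; the following is a minimal sketch of the attention-pooling step that an "LSTM-ATT" architecture typically adds on top of LSTM outputs. The LSTM itself is stubbed out: `hidden` stands in for the per-timestep hidden states an LSTM would produce, and all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden, w):
    """Score each timestep, softmax the scores, return the weighted sum.

    hidden : (T, H) LSTM hidden states for one input sequence
    w      : (H,)   learnable scoring vector (hypothetical parameterization)
    """
    scores = hidden @ w            # (T,) one scalar relevance score per timestep
    weights = softmax(scores)      # attention weights, non-negative, sum to 1
    context = weights @ hidden     # (H,) attention-weighted summary of the sequence
    return context, weights

# Toy usage: 24 timesteps of A2O sensor history, hidden size 8.
rng = np.random.default_rng(0)
hidden = rng.standard_normal((24, 8))   # stand-in for real LSTM outputs
w = rng.standard_normal(8)
context, weights = attention_pool(hidden, w)
```

The `context` vector would then feed a small output head predicting the next process state, letting the simulator weight informative timesteps more heavily than a plain last-hidden-state readout.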
It also demonstrates that a purely data-driven DRL approach can yield effective RTC for the A2O process, offering a decision-support method for plant management.
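The training framework described above can be sketched in miniature: the LSTM-ATT surrogate and the DQN are replaced here by a one-line toy simulator and tabular Q-learning, just to show how a data-driven model substitutes for a mechanistic simulator during DRL training. All dynamics, limits, and costs below are invented for illustration and are not taken from the paper.

```python
import numpy as np

DO_SETPOINTS = np.array([3.0, 3.5, 4.0, 4.5])   # candidate DO setpoints, mg/L (actions)
NH4_IN_BINS = np.array([10.0, 20.0, 30.0])      # influent NH4-N bins, mg/L (states)

def surrogate_step(nh4_in, do_sp):
    """Stand-in for the LSTM-ATT simulator: effluent NH4-N after treatment."""
    removal = min(0.98, 0.25 * do_sp)           # toy rule: removal improves with DO
    return nh4_in * (1.0 - removal)

def reward(nh4_out, do_sp, limit=1.5):
    """Penalize aeration energy; penalize effluent limit violations heavily."""
    return -0.1 * do_sp - (10.0 if nh4_out > limit else 0.0)

Q = np.zeros((len(NH4_IN_BINS), len(DO_SETPOINTS)))
rng = np.random.default_rng(0)
alpha, eps = 0.2, 0.1                            # one-step task, so no discounting
for _ in range(5000):
    s = rng.integers(len(NH4_IN_BINS))           # sample an influent condition
    a = rng.integers(len(DO_SETPOINTS)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(surrogate_step(NH4_IN_BINS[s], DO_SETPOINTS[a]), DO_SETPOINTS[a])
    Q[s, a] += alpha * (r - Q[s, a])             # tabular TD(0) update

policy = DO_SETPOINTS[np.argmax(Q, axis=1)]      # learned DO setpoint per state
```

Because every "simulation" is a cheap function call rather than a mechanistic model run, thousands of training episodes finish in milliseconds; this is the speed advantage the paper attributes to training against the LSTM-ATT surrogate.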
Pages: 13
Related Papers
50 records in total
  • [21] Reinforcement learning control method for real-time hybrid simulation based on deep deterministic policy gradient algorithm
    Li, Ning
    Tang, Jichuan
    Li, Zhong-Xian
    Gao, Xiuyu
    Structural Control and Health Monitoring, 2022, 29 (10)
  • [22] Real-time dynamic scheduling for garment sewing process based on deep reinforcement learning
    Liu F.
    Xu J.
    Ke W.
    Fangzhi Xuebao/Journal of Textile Research, 2022, 43 (09): : 41 - 48
  • [24] Event-Driven Model Predictive Control With Deep Learning for Wastewater Treatment Process
    Wang, Gongming
    Bi, Jing
    Jia, Qing-Shan
    Qiao, Junfei
    Wang, Lei
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (05) : 6398 - 6407
  • [25] A Data-Driven Model-Reference Adaptive Control Approach Based on Reinforcement Learning
    Abouheaf, Mohammed
    Gueaieb, Wail
    Spinello, Davide
    Al-Sharhan, Salah
    2021 IEEE INTERNATIONAL SYMPOSIUM ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2021), 2021,
  • [26] Two-Stage Data-Driven optimal energy management and dynamic Real-Time operation in networked microgrid based deep reinforcement learning approach
    Hedayatnia, Atefeh
    Ghafourian, Javid
    Sepehrzad, Reza
    Al-Durrad, Ahmed
    Anvari-Moghaddam, Amjad
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2024, 160
  • [27] Real-Time Scheduling of Cloud Manufacturing Services Based on Dynamic Data-Driven Simulation
    Zhou, Longfei
    Zhang, Lin
    Ren, Lei
    Wang, Jian
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2019, 15 (09) : 5042 - 5051
  • [28] Real-time power system generator tripping control based on deep reinforcement learning
    Lin, Bilin
    Wang, Huaiyuan
    Zhang, Yang
    Wen, Buying
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2022, 141
  • [29] Deep Reinforcement Learning Based Real-Time Renewable Energy Bidding with Battery Control
    Jeong, Jaeik
    Kim, Seung Wan
    Kim, Hongseok
    IEEE Transactions on Energy Markets, Policy and Regulation, 2023, 1 (02): : 85 - 96
  • [30] Real-time planning and collision avoidance control method based on deep reinforcement learning
    Xu, Xinli
    Cai, Peng
    Cao, Yunlong
    Chu, Zhenzhong
    Zhu, Wenbo
    Zhang, Weidong
    OCEAN ENGINEERING, 2023, 281