Real-Time Control of A2O Process in Wastewater Treatment Through Fast Deep Reinforcement Learning Based on Data-Driven Simulation Model

Cited: 0
Authors
Hu, Fukang [1 ]
Zhang, Xiaodong [2 ]
Lu, Baohong [2 ]
Lin, Yue [2 ]
Affiliations
[1] Univ New South Wales, Coll Civil Engn, Sydney, NSW 2052, Australia
[2] Hohai Univ, Coll Hydrol & Water Resources, Nanjing 210098, Peoples R China
Keywords
anaerobic-anoxic-oxic; real-time control; deep reinforcement learning; deep learning; ENERGY-CONSUMPTION; TREATMENT PLANTS
DOI
10.3390/w16243710
Chinese Library Classification
X [Environmental Science, Safety Science]
Discipline Classification Codes
08; 0830
Abstract
Real-time control (RTC) can be applied to optimize the operation of the anaerobic-anoxic-oxic (A2O) process in wastewater treatment for energy saving. In recent years, many studies have used deep reinforcement learning (DRL) to build novel AI-based RTC systems for optimizing the A2O process. However, existing DRL methods require mechanistic models of the A2O process for training. They therefore need specific data to construct those mechanistic models, which is often hard to obtain in wastewater treatment plants (WWTPs) with inadequate data collection facilities. DRL training is also time-consuming because it needs repeated simulations of the mechanistic model. To address these issues, this study designs a novel data-driven RTC method. The method first creates a simulation model of the A2O process using an LSTM with an attention module (LSTM-ATT), which can be built from whatever process data are available. The LSTM-ATT model can be viewed as a simplified version of a large language model (LLM): it analyzes time-series data far more effectively than conventional deep learning models, yet its small architecture avoids overfitting the A2O dynamics data. On this basis, a new DRL training framework is constructed that leverages the fast computation of LSTM-ATT to accelerate DRL training. The proposed method is applied to a WWTP in Western China. An LSTM-ATT simulation model is built and used to train a deep Q-network (DQN) RTC model that reduces aeration while keeping the effluent qualified. For the LSTM-ATT simulation, mean squared errors range from 0.0039 to 0.0243, and R-squared values exceed 0.996. The control strategy provided by the DQN reduces the average DO setpoint from 3.956 mg/L to 3.884 mg/L with acceptable effluent quality. This study provides a purely data-driven, DRL-based RTC method for the A2O process in WWTPs that is effective for energy saving and consumption reduction. It also demonstrates that purely data-driven DRL can construct effective RTC methods for the A2O process, providing decision support for plant management.
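The surrogate simulator described in the abstract pairs an LSTM with an attention module. A minimal sketch of such an LSTM-ATT architecture, assuming PyTorch (this is not the authors' code; the feature counts, window length, and single-head attention are illustrative assumptions):

```python
# Sketch of an LSTM + attention surrogate model for A2O process dynamics.
# Input: a window of past process measurements plus control settings
# (e.g. DO setpoints); output: predicted effluent indicators.
import torch
import torch.nn as nn

class LSTMATT(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_outputs=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Single-head self-attention over the LSTM hidden states,
        # a lightweight stand-in for the paper's attention module.
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                # x: (batch, time, n_features)
        h, _ = self.lstm(x)              # (batch, time, hidden)
        a, _ = self.attn(h, h, h)        # attention-weighted hidden states
        return self.head(a[:, -1, :])    # predict from the final step

model = LSTMATT()
x = torch.randn(16, 24, 8)               # 16 windows of 24 time steps
y_hat = model(x)
print(tuple(y_hat.shape))                # (16, 3)
```

Trained on historical plant records with an MSE loss, such a model can stand in for a mechanistic simulator during DRL training, which is the speed-up the paper exploits.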
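The DRL side can then query the surrogate instead of a mechanistic model. Below is a heavily simplified Q-learning sketch in that spirit (not the authors' implementation): the action is a discrete DO setpoint, the `surrogate_step` function is a hypothetical placeholder for the trained simulator, and the reward trades aeration cost against an effluent penalty; a full DQN would add a replay buffer and a target network.

```python
# Simplified DQN-style loop: the agent picks a DO setpoint each step and a
# placeholder surrogate returns the next state and reward.
import random
import torch
import torch.nn as nn

SETPOINTS = [3.0, 3.5, 4.0, 4.5]          # candidate DO setpoints, mg/L

def surrogate_step(state, action):
    """Stand-in for the trained LSTM-ATT simulator (illustrative dynamics)."""
    do = SETPOINTS[action]
    effluent = max(0.0, 1.5 - 0.3 * do) + 0.05 * random.random()
    reward = -0.1 * do - (1.0 if effluent > 1.0 else 0.0)  # aeration + penalty
    return torch.tensor([do, effluent]), reward

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(SETPOINTS)))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.2
state = torch.tensor([3.5, 0.5])

for step in range(200):                    # short illustrative training run
    if random.random() < eps:              # epsilon-greedy exploration
        action = random.randrange(len(SETPOINTS))
    else:
        action = int(q_net(state).argmax())
    next_state, reward = surrogate_step(state, action)
    target = reward + gamma * q_net(next_state).max().detach()
    loss = (q_net(state)[action] - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = next_state
```

Because every environment step is a cheap forward pass through the surrogate rather than a mechanistic simulation, many such episodes can be run quickly, which is the core of the training speed-up claimed in the abstract.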
Pages: 13
Related Papers
50 records total
  • [31] Comparative Study of Data-Driven and Model-Based Real-Time Prediction during Rubber Curing Process
    Frank, Tobias
    Bosselmann, Steffen
    Wielitzka, Mark
    Ortmaier, Tobias
    2018 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2018, : 164 - 169
  • [32] Data Driven Real-Time Dynamic Voltage Control Using Decentralized Execution Multi-Agent Deep Reinforcement Learning
    Wang, Yuling
    Vittal, Vijay
    IEEE OPEN ACCESS JOURNAL OF POWER AND ENERGY, 2024, 11 : 508 - 519
  • [33] Deep Reinforcement Learning with Uncertain Data for Real-Time Stormwater System Control and Flood Mitigation
    Saliba, Sami M.
    Bowes, Benjamin D.
    Adams, Stephen
    Beling, Peter A.
    Goodall, Jonathan L.
    WATER, 2020, 12 (11) : 1 - 19
  • [34] Data-Driven Reinforcement Learning-Based Real-Time Energy Management System for Plug-In Hybrid Electric Vehicles
    Qi, Xuewei
    Wu, Guoyuan
    Boriboonsomsin, Kanok
    Barth, Matthew J.
    Gonder, Jeffrey
    TRANSPORTATION RESEARCH RECORD, 2016, (2572) : 1 - 8
  • [35] Deep learning based simulators for the phosphorus removal process control in wastewater treatment via deep reinforcement learning algorithms
    Mohammadi, Esmaeel
    Stokholm-Bjerregaard, Mikkel
    Hansen, Aviaja Anna
    Nielsen, Per Halkjaer
    Ortiz-Arroyo, Daniel
    Durdevic, Petar
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [36] A Proactive Real-Time Control Strategy Based on Data-Driven Transit Demand Prediction
    Wang, Wensi
    Zong, Fang
    Yao, Baozhen
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (04) : 2404 - 2416
  • [37] Development of a Connected Corridor Real-Time Data-Driven Traffic Digital Twin Simulation Model
    Saroj, Abhilasha J.
    Roy, Somdut
    Guin, Angshuman
    Hunter, Michael
    JOURNAL OF TRANSPORTATION ENGINEERING PART A-SYSTEMS, 2021, 147 (12)
  • [38] Data-driven disturbance compensation control for discrete-time systems based on reinforcement learning
    Li, Lanyue
    Li, Jinna
    Cao, Jiangtao
    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, 2024,
  • [39] Real-time machine learning for in situ quality control in hybrid manufacturing: a data-driven approach
    Mavaluru, Dinesh
    Tipparti, Akanksha
    Tipparti, Anil Kumar
    Ameenuddin, Mohammed
    Ramakrishnan, Jayabrabu
    Samrin, Rafath
    INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2025,
  • [40] Control-Tutored Reinforcement Learning: Towards the Integration of Data-Driven and Model-Based Control
    DeLellis, Francesco
    Coraggio, Marco
    Russo, Giovanni
    Musolesi, Mirco
    di Bernardo, Mario
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 168, 2022, 168