Real-Time Control of A2O Process in Wastewater Treatment Through Fast Deep Reinforcement Learning Based on Data-Driven Simulation Model

Cited by: 0
Authors
Hu, Fukang [1 ]
Zhang, Xiaodong [2 ]
Lu, Baohong [2 ]
Lin, Yue [2 ]
Affiliations
[1] Univ New South Wales, Coll Civil Engn, Sydney, NSW 2052, Australia
[2] Hohai Univ, Coll Hydrol & Water Resources, Nanjing 210098, Peoples R China
Keywords
anaerobic-anoxic-oxic; real-time control; deep reinforcement learning; deep learning; energy consumption; treatment plants
DOI
10.3390/w16243710
Chinese Library Classification
X [Environmental Science, Safety Science]
Discipline Classification Codes
08; 0830
Abstract
Real-time control (RTC) can optimize the operation of the anaerobic-anoxic-oxic (A2O) process in wastewater treatment for energy saving. In recent years, many studies have used deep reinforcement learning (DRL) to construct AI-based RTC systems for the A2O process. However, existing DRL methods require mechanistic models of the A2O process for training, and therefore need specific data for constructing those models, which is often difficult to obtain in wastewater treatment plants (WWTPs) with inadequate data collection facilities. DRL training is also time-consuming because it requires many simulations of the mechanistic model. To address these issues, this study designs a novel data-driven RTC method. The method first builds a simulation model of the A2O process using an LSTM with an attention module (LSTM-ATT), which can be established from flexible A2O process data. The LSTM-ATT model is a simplified counterpart of a large language model (LLM): it is far more capable of analyzing time-sequence data than typical deep learning models, yet its small architecture avoids overfitting the A2O dynamic data. On this basis, a new DRL training framework is constructed that leverages the rapid computation of LSTM-ATT to accelerate DRL training. The proposed method is applied to a WWTP in Western China, where an LSTM-ATT simulation model is built and used to train a DRL RTC model that reduces aeration while keeping the effluent qualified. The LSTM-ATT simulation achieves a mean squared error between 0.0039 and 0.0243, with R-squared values above 0.996. The control strategy provided by the DQN reduces the average DO setpoint from 3.956 mg/L to 3.884 mg/L with acceptable effluent quality. This study provides a purely data-driven, DRL-based RTC method for the A2O process in WWTPs that is effective for energy saving and consumption reduction. It also demonstrates that purely data-driven DRL can construct effective RTC methods for the A2O process, providing a decision-support method for management.
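The abstract's core idea, a fast data-driven surrogate standing in for the mechanistic plant model inside a reinforcement-learning loop that picks DO setpoints, can be sketched as follows. This is an illustrative toy only: tabular Q-learning replaces the paper's DQN, a one-line function (`surrogate_effluent`) stands in for the LSTM-ATT model, and all action ranges, limits, and reward weights are made-up assumptions, not the authors' values.

```python
import numpy as np

# Toy sketch of DRL-based real-time control with a data-driven surrogate:
# an agent chooses a DO setpoint; a fast surrogate (stand-in for LSTM-ATT)
# predicts effluent quality; the reward trades aeration energy against
# effluent-limit violations. All numbers here are illustrative assumptions.

rng = np.random.default_rng(0)

DO_ACTIONS = np.arange(2.0, 5.01, 0.25)    # candidate DO setpoints (mg/L)
EFFLUENT_LIMIT = 1.0                        # toy effluent-quality limit
N_STATES = 5                                # discretized inflow-load states

def surrogate_effluent(do_setpoint, inflow_load):
    """Toy surrogate: higher DO -> cleaner effluent, diminishing returns."""
    return inflow_load / (1.0 + do_setpoint)

def reward(do_setpoint, inflow_load):
    effluent = surrogate_effluent(do_setpoint, inflow_load)
    energy_cost = 0.1 * do_setpoint                     # aeration energy
    violation = 10.0 if effluent > EFFLUENT_LIMIT else 0.0
    return -energy_cost - violation

# Tabular Q-learning over (load state, DO action) pairs.
Q = np.zeros((N_STATES, len(DO_ACTIONS)))
alpha, eps = 0.1, 0.2

for episode in range(20000):
    s = rng.integers(N_STATES)
    load = 2.0 + s                                      # state -> inflow load
    if rng.random() < eps:
        a = rng.integers(len(DO_ACTIONS))               # explore
    else:
        a = int(Q[s].argmax())                          # exploit
    r = reward(DO_ACTIONS[a], load)
    Q[s, a] += alpha * (r - Q[s, a])                    # one-step update

policy = DO_ACTIONS[Q.argmax(axis=1)]
print("Learned DO setpoints per load state:", policy)
```

Because the surrogate is a cheap function call rather than a mechanistic simulation, the training loop runs in milliseconds; this is the speed-up the paper attributes to replacing the mechanistic model with LSTM-ATT. The learned policy raises DO only as the inflow load demands, saving aeration energy at low loads.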
Pages: 13
Related papers (50 total)
  • [41] An innovative multi-head attention model with BiMGRU for real-time electric vehicle charging management through deep reinforcement learning
    Mishra, Shivendu
    Choubey, Anurag
    Devarasetty, Sri Vaibhav
    Sharma, Nelson
    Misra, Rajiv
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (07): : 9993 - 10023
  • [42] Real-time disruption prediction in the plasma control system of HL-2A based on deep learning
    Yang, Zongyu
    Xia, Fan
    Song, Xianming
    Gao, Zhe
    Li, Yixuan
    Gong, Xinwen
    Dong, Yunbo
    Zhang, Yipo
    Chen, Chengyuan
    Luo, Cuiwen
    Li, Bo
    Zhu, Xiaobo
    Ji, Xiaoquan
    Li, Yonggao
    Liu, Liang
    Gao, Jinming
    Liu, Yuhang
    FUSION ENGINEERING AND DESIGN, 2022, 182
  • [43] Real-Time Simulation of Parameter-Dependent Fluid Flows through Deep Learning-Based Reduced Order Models
    Fresca, Stefania
    Manzoni, Andrea
    FLUIDS, 2021, 6 (07)
  • [44] Real-Time Bottleneck Identification and Graded Variable Speed Limit Control Framework for Mixed Traffic Flow on Highways Based on Deep Reinforcement Learning
    Shi, Yunyang
    Liu, Chengqi
    Sun, Qiang
    Liu, Chengjie
    Liu, Hongzhe
    Gu, Ziyuan
    Liu, Shaoweihua
    Feng, Shi
    Wang, Runsheng
    JOURNAL OF TRANSPORTATION ENGINEERING PART A-SYSTEMS, 2025, 151 (05)
  • [45] High flow prediction model integrating physically and deep learning based approaches with quasi real-time watershed data assimilation
    Jeong, Minyeob
    Kwon, Moonhyuk
    Cha, Jun-Ho
    Kim, Dae-Hong
    JOURNAL OF HYDROLOGY, 2024, 636
  • [46] Real-time automatic control of multi-energy system for smart district community: A coupling ensemble prediction model and safe deep reinforcement learning
    Alabi, Tobi Michael
    Lu, Lin
    Yang, Zaiyue
    ENERGY, 2024, 304
  • [47] A TM-Based Adaptive Learning Data-Model for Trajectory Tracking and Real-Time Control of a Class of Nonlinear Systems
    Li, Junkang
    Fang, Yong
    Zhang, Liming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2022, 69 (02) : 859 - 871
  • [48] Real-Time Data-Driven Microstructural Defect Detection in Scanning Electron Microscopy Images of Additively Manufactured Ti6Al4V using Advanced Deep Learning Method
    Talouki, M. Hassanzadeh
    Mirnia, M. J.
    Elyasi, M.
    INTERNATIONAL JOURNAL OF ENGINEERING, 2025, 38 (07): : 1533 - 1544
  • [49] Real-Time Energy Management Strategy for Fuel Cell/Battery Plug-In Hybrid Electric Buses Based on Deep Reinforcement Learning and State of Charge Descent Curve Trajectory Control
    Lian, Jing
    Li, Deyao
    Li, Linhui
    ENERGY TECHNOLOGY, 2024
  • [50] Intelligent real-time control system through socket communication using deep learning-based de-hazing and object detection in an embedded board environment
    An, Je Hong
    Jung, Kwang Hyun
    Kim, Sang Yoo
    Mun, Ji Su
    Han, Min Gu
    12TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE (ICTC 2021): BEYOND THE PANDEMIC ERA WITH ICT CONVERGENCE INNOVATION, 2021, : 1494 - 1497