Real-Time Control of A2O Process in Wastewater Treatment Through Fast Deep Reinforcement Learning Based on Data-Driven Simulation Model

Times Cited: 0
Authors
Hu, Fukang [1 ]
Zhang, Xiaodong [2 ]
Lu, Baohong [2 ]
Lin, Yue [2 ]
Affiliations
[1] Univ New South Wales, Coll Civil Engn, Sydney, NSW 2052, Australia
[2] Hohai Univ, Coll Hydrol & Water Resources, Nanjing 210098, Peoples R China
Keywords
anaerobic-anoxic-oxic; real-time control; deep reinforcement learning; deep learning; energy consumption; treatment plants
DOI
10.3390/w16243710
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Classification Code
08; 0830
Abstract
Real-time control (RTC) can be applied to optimize the operation of the anaerobic-anoxic-oxic (A2O) process in wastewater treatment for energy saving. In recent years, many studies have used deep reinforcement learning (DRL) to build AI-based RTC systems for optimizing the A2O process. However, existing DRL methods require mechanistic models of the A2O process for training, and constructing such models demands specific data that is often unavailable in wastewater treatment plants (WWTPs) with inadequate data-collection facilities. In addition, DRL training is time-consuming because it requires repeated simulations of the mechanistic model. To address these issues, this study designs a novel data-driven RTC method. The method first creates a simulation model of the A2O process using an LSTM with an attention module (LSTM-ATT), which can be built from whatever operational data the A2O process provides. The LSTM-ATT model is a simplified version of a large language model (LLM): it retains the LLM's strong ability to analyze time-sequence data, while its small architecture avoids overfitting the A2O dynamic data. On this basis, a new DRL training framework is constructed that leverages the rapid computation of LSTM-ATT to accelerate DRL training. The proposed method is applied to a WWTP in Western China. An LSTM-ATT simulation model is built and used to train a deep Q-network (DQN) RTC model that reduces aeration while keeping the effluent within quality standards. The LSTM-ATT simulation achieves mean squared errors between 0.0039 and 0.0243, with R-squared values above 0.996. The control strategy provided by the DQN reduces the average DO setpoint from 3.956 mg/L to 3.884 mg/L while maintaining acceptable effluent quality. This study provides a purely data-driven, DRL-based RTC method for the A2O process in WWTPs that is effective for energy saving and consumption reduction, and it demonstrates that purely data-driven DRL can yield effective RTC for the A2O process, offering a decision-support tool for plant management.
Pages: 13
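To make the two components described in the abstract more concrete, the following is a minimal, illustrative sketch: a small LSTM-with-attention surrogate standing in for the A2O process, and a skeletal DQN loop that uses the surrogate as its training environment so that no mechanistic simulator is needed. PyTorch is assumed (the paper does not specify a framework), and the layer sizes, feature layout, 24-step observation window, discrete DO setpoint set, and reward terms are placeholders rather than the authors' configuration.

```python
# Illustrative sketch only: an LSTM-attention surrogate of the A2O process and a DQN
# agent trained against it. All dimensions, setpoints, and rewards are assumptions.
import random
import torch
import torch.nn as nn


class LSTMATTSurrogate(nn.Module):
    """LSTM encoder + additive attention over time, regressing the next process state."""

    def __init__(self, n_features: int, hidden: int = 64, n_outputs: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)            # scores each time step
        self.head = nn.Linear(hidden, n_outputs)   # predicted state/effluent variables

    def forward(self, x):                          # x: (batch, time, n_features)
        h, _ = self.lstm(x)                        # (batch, time, hidden)
        w = torch.softmax(self.att(h), dim=1)      # attention weights over the window
        context = (w * h).sum(dim=1)               # weighted summary of the sequence
        return self.head(context)


class DQN(nn.Module):
    """Maps the current state summary to Q-values over discrete DO setpoints."""

    def __init__(self, n_state: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)


DO_SETPOINTS = [3.0, 3.5, 4.0, 4.5]                # hypothetical discrete actions (mg/L)
N_FEATURES, N_STATE = 6, 4

# In practice the surrogate would first be fit to historical plant data (supervised);
# here it is left untrained because the loop only illustrates the interaction pattern.
surrogate = LSTMATTSurrogate(N_FEATURES)
agent = DQN(N_STATE, len(DO_SETPOINTS))
target = DQN(N_STATE, len(DO_SETPOINTS))
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

window = torch.zeros(1, 24, N_FEATURES)            # last 24 time steps of observations
for step in range(1000):
    state = surrogate(window).detach()             # one forward pass = one "env" step
    if random.random() < 0.1:                      # epsilon-greedy exploration
        action = random.randrange(len(DO_SETPOINTS))
    else:
        action = agent(state).argmax(dim=1).item()

    # Append the chosen setpoint to the observation window (last feature slots, assumed).
    new_obs = torch.cat([state, torch.tensor([[DO_SETPOINTS[action], 0.0]])], dim=1)
    window = torch.cat([window[:, 1:], new_obs.unsqueeze(1)], dim=1)

    next_state = surrogate(window).detach()
    # Placeholder reward: penalize aeration (via DO setpoint) and deviation of the state.
    reward = -0.1 * DO_SETPOINTS[action] - next_state.abs().mean()

    q = agent(state)[0, action]
    with torch.no_grad():
        q_target = reward + 0.99 * target(next_state).max()
    loss = (q - q_target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

    if step % 100 == 0:                            # periodic target-network sync
        target.load_state_dict(agent.state_dict())
```

The point the sketch illustrates is the speed argument in the abstract: every environment step is a single forward pass through the data-driven surrogate, which is why DRL training can proceed much faster than when each step requires a mechanistic A2O simulation.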