Reinforcement learning-based assimilation of the WOFOST crop model

Cited: 0
Authors
Chen, Haochong [1 ,2 ,3 ]
Yuan, Xiangning [1 ,2 ,3 ]
Kang, Jian [1 ,2 ,3 ]
Yang, Danni [1 ,2 ,3 ]
Yang, Tianyi [1 ,2 ,3 ]
Ao, Xiang [1 ,2 ,3 ]
Li, Sien [1 ,2 ,3 ]
Affiliations
[1] State Key Lab Efficient Utilizat Agr Water Resourc, Beijing, Peoples R China
[2] Natl Field Sci Observat & Res Stn Efficient Water, Wuwei 733009, Peoples R China
[3] China Agr Univ, Ctr Agr Water Res China, Beijing 100083, Peoples R China
Source
Funding
National Natural Science Foundation of China
Keywords
Dynamic crop model; Assimilation; Reinforcement learning; WOFOST
DOI
10.1016/j.atech.2024.100604
Chinese Library Classification
S2 [Agricultural Engineering]
Discipline Classification Code
0828
Abstract
Crop model assimilation is a crucial technique for improving the accuracy and precision of crop models by integrating observational data into model simulations. Although conventional data assimilation methods such as Kalman filtering and variational methods have been widely applied, they often face limitations from data quality, model bias, and high computational complexity. This study explored the potential of reinforcement learning (RL) for crop model assimilation, which has the advantage of not requiring large training datasets. Based on the WOFOST crop model, two RL environments were constructed: a Daily-Data Driven approach and a Time-Series Driven approach. The Proximal Policy Optimization (PPO) algorithm was used to train agents in both environments for 100,000 iterations. The assimilation results were compared with the commonly used SUBPLEX optimization algorithm using four years of field measurement data and a public dataset with added random errors. Our results demonstrate that the Time-Series Driven RL model achieved assimilation accuracy comparable to the SUBPLEX optimization algorithm, with an average MAE of 0.65 versus 0.76 for SUBPLEX and a slight decrease in RMSE, while reducing the computational burden by a factor of 365. In a multi-year stability test, the Time-Series Driven RL model and SUBPLEX showed similar assimilation performance. This study demonstrates the potential of RL for crop model assimilation, providing a novel approach that overcomes key limitations of conventional assimilation algorithms. The findings suggest that RL-based crop model assimilation can improve model accuracy and efficiency, with potential for practical application in precision agriculture.
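As a rough illustration of the pipeline the abstract describes (wrap the crop model in an RL environment, then train a PPO agent so the adjusted simulation tracks observations), consider the minimal sketch below. It is not the authors' implementation: the toy logistic growth model standing in for WOFOST, the state/action/reward definitions, and the use of the gymnasium and stable-baselines3 libraries are all illustrative assumptions.

import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO

class AssimilationEnv(gym.Env):
    """Toy assimilation environment: the agent nudges a growth-rate
    parameter each day; the reward penalizes the gap between the
    simulated and observed leaf area index (LAI)."""

    def __init__(self, observed_lai: np.ndarray):
        super().__init__()
        self.observed = observed_lai
        self.action_space = spaces.Box(-0.1, 0.1, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-1.0, 10.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.lai, self.rate = 0, 0.1, 0.2
        return np.array([self.lai, self.observed[0]], dtype=np.float32), {}

    def step(self, action):
        # The action perturbs a model parameter, as an assimilation update would.
        self.rate = float(np.clip(self.rate + action[0], 0.05, 0.5))
        # Toy logistic growth stands in for a daily WOFOST state update.
        self.lai += self.rate * self.lai * (1.0 - self.lai / 7.0)
        self.t += 1
        reward = -abs(self.lai - self.observed[self.t])
        done = self.t >= len(self.observed) - 1
        obs = np.array([self.lai, self.observed[self.t]], dtype=np.float32)
        return obs, reward, done, False, {}

# Synthetic noisy "observations" mimicking field LAI measurements.
days = np.arange(120)
observed = 6.0 / (1.0 + np.exp(-0.08 * (days - 60))) + np.random.normal(0.0, 0.2, days.size)

env = AssimilationEnv(observed.astype(np.float32))
agent = PPO("MlpPolicy", env, verbose=0)
agent.learn(total_timesteps=100_000)  # mirrors the 100,000 training iterations reported

Once trained, assimilation reduces to cheap forward passes of the learned policy rather than a fresh optimization run per season, which is consistent with the large reduction in computational cost the abstract reports.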
Pages: 11