Reinforcement learning-based assimilation of the WOFOST crop model

Cited: 0
Authors
Chen, Haochong [1 ,2 ,3 ]
Yuan, Xiangning [1 ,2 ,3 ]
Kang, Jian [1 ,2 ,3 ]
Yang, Danni [1 ,2 ,3 ]
Yang, Tianyi [1 ,2 ,3 ]
Ao, Xiang [1 ,2 ,3 ]
Li, Sien [1 ,2 ,3 ]
Affiliations
[1] State Key Lab Efficient Utilizat Agr Water Resourc, Beijing, People's Republic of China
[2] Natl Field Sci Observat & Res Stn Efficient Water, Wuwei 733009, People's Republic of China
[3] China Agr Univ, Ctr Agr Water Res China, Beijing 100083, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Dynamic crop model; Assimilation; Reinforcement learning; WOFOST
DOI
10.1016/j.atech.2024.100604
CLC Classification
S2 [Agricultural Engineering]
Subject Classification
0828
Abstract
Crop model assimilation is a crucial technique for improving the accuracy and precision of crop models by integrating observational data into model simulations. Although conventional data assimilation methods such as Kalman filtering and variational methods have been widely applied, they are often limited by data quality, model bias, and high computational complexity. This study explored the potential of reinforcement learning (RL) for crop model assimilation, which has the advantage of not requiring large datasets. Based on the WOFOST crop model, two RL environments were constructed: a Daily-Data Driven approach and a Time-Series Driven approach. The Proximal Policy Optimization (PPO) algorithm was used to train agents in each environment for 100,000 iterations. The assimilation results were compared with the commonly used SUBPLEX optimization algorithm using four years of field measurements and a public dataset with added random errors. Our results demonstrate that the Time-Series Driven RL model achieved assimilation accuracy comparable to the SUBPLEX optimization algorithm, with an average MAE of 0.65 compared to 0.76 for SUBPLEX, and a slight decrease in RMSE, while reducing the computational burden by a factor of 365. In a multi-year stability test, the Time-Series Driven RL model and SUBPLEX showed similar assimilation performance. This study demonstrates the potential of RL for crop model assimilation, providing a novel approach to overcome the limitations of conventional assimilation algorithms. The findings suggest that RL-based crop model assimilation can improve model accuracy and efficiency, with potential for practical applications in precision agriculture.
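The abstract's "Time-Series Driven" setup can be illustrated with a minimal sketch: an RL environment whose episode spans a growing season, where the agent's action adjusts a model parameter so the simulated state tracks observations, and the reward is the negative tracking error. All names here are hypothetical; the one-line growth update stands in for WOFOST, and the random policy stands in for a trained PPO agent (which the paper trains for 100,000 iterations), so this is not the paper's implementation.

```python
import random

class ToyAssimilationEnv:
    """Toy 'Time-Series Driven' assimilation environment (illustrative only):
    one episode spans a season; the agent tunes a single model parameter so
    the simulated state tracks an observed time series."""

    def __init__(self, observed):
        self.observed = observed      # observed LAI-like time series
        self.reset()

    def reset(self):
        self.t = 0
        self.state = 0.5              # simulated LAI-like state variable
        self.growth = 0.10            # the parameter being assimilated
        return (self.state, self.observed[0])

    def step(self, action):
        # action in {-1, 0, +1}: decrease / keep / increase the parameter
        self.growth = max(0.0, self.growth + 0.01 * action)
        self.state += self.growth     # one "daily" model update
        reward = -abs(self.state - self.observed[self.t])
        self.t += 1
        done = self.t >= len(self.observed)
        nxt = self.observed[min(self.t, len(self.observed) - 1)]
        return (self.state, nxt), reward, done

# Roll out one episode with a random policy; in the paper's setting a PPO
# trainer would drive this loop and replace the random action with its policy.
rng = random.Random(42)
env = ToyAssimilationEnv(observed=[0.6 + 0.15 * t for t in range(10)])
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = rng.choice([-1, 0, 1])
    obs, reward, done = env.step(action)
    total_reward += reward
```

The reward shaping (negative absolute error against observations) is what lets a policy-gradient method such as PPO learn to steer model parameters toward the observed trajectory without the large training datasets that supervised calibration would require.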
Pages: 11