Continual Reinforcement Learning for Intelligent Agricultural Management under Climate Changes

Cited by: 1
Authors
Wang, Zhaoan [1 ]
Jha, Kishlay [2 ]
Xiao, Shaoping [1 ]
Affiliations
[1] Univ Iowa, Iowa Technol Inst, Dept Mech Engn, Iowa City, IA 52242 USA
[2] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2024, Vol. 81, No. 1
Funding
U.S. National Science Foundation
关键词
Continual learning; reinforcement learning; agricultural management; climate variability; NEURAL-NETWORKS;
DOI
10.32604/cmc.2024.055809
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Climate change poses significant challenges to agricultural management, particularly in adapting to extreme weather conditions that affect agricultural production. Existing approaches based on traditional Reinforcement Learning (RL) often falter under such extreme conditions. To address this challenge, our study integrates Continual Learning (CL) with RL to form Continual Reinforcement Learning (CRL), enhancing the adaptability of agricultural management strategies. Leveraging the Gym-DSSAT simulation environment, our research enables RL agents to learn optimal fertilization strategies under variable weather conditions. By combining CL algorithms, such as Elastic Weight Consolidation (EWC), with established RL techniques like Deep Q-Networks (DQN), we developed a framework in which agents learn and retain knowledge across diverse weather scenarios. The CRL approach was tested under climate variability to assess the robustness and adaptability of the induced policies, particularly under extreme weather events such as severe droughts. Our results showed that continually learned policies exhibited superior adaptability and performance compared to policies learned through conventional RL methods, especially under the challenging conditions of reduced rainfall and increased temperatures. This pioneering work, which combines CL with RL to generate adaptive policies for agricultural management, is expected to significantly advance precision agriculture in the era of climate change.
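To make the EWC-plus-DQN combination described above concrete, the sketch below shows how an EWC penalty can anchor a DQN Q-network while it is trained across successive weather scenarios. It is a minimal PyTorch illustration under assumed details: the network architecture, the one-batch diagonal Fisher estimate, and the helper names (QNetwork, consolidate, ewc_penalty) are hypothetical, and the Gym-DSSAT environment loop from the paper is omitted.

```python
# Minimal sketch: EWC-regularized DQN training across weather scenarios.
# All names and hyperparameters here are illustrative assumptions, not the
# paper's implementation; the Gym-DSSAT interaction loop is omitted.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP mapping a crop/weather state vector to Q-values over
    discrete actions (e.g., fertilization levels)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def consolidate(model, states, actions, rewards, next_states, gamma=0.99):
    """After training on one scenario, snapshot the parameters and estimate
    a diagonal Fisher matrix from squared gradients of the TD loss (a crude
    one-batch approximation, sufficient for illustration)."""
    snapshot = {n: p.detach().clone() for n, p in model.named_parameters()}
    model.zero_grad()
    q = model(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * model(next_states).max(dim=1).values
    nn.functional.mse_loss(q, target).backward()
    fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
    return snapshot, fisher


def ewc_penalty(model, snapshot, fisher, lam=100.0):
    """EWC anchor term: (lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    discouraging drift in parameters important to earlier scenarios."""
    loss = torch.tensor(0.0)
    for n, p in model.named_parameters():
        loss = loss + (fisher[n] * (p - snapshot[n]) ** 2).sum()
    return 0.5 * lam * loss


if __name__ == "__main__":
    torch.manual_seed(0)
    model = QNetwork(state_dim=8, n_actions=4)
    # Fake transition batch standing in for a first weather scenario.
    s, a = torch.randn(32, 8), torch.randint(0, 4, (32,))
    r, s2 = torch.randn(32), torch.randn(32, 8)
    snapshot, fisher = consolidate(model, s, a, r, s2)

    # One DQN gradient step on a new scenario (re-using the fake batch
    # for brevity): TD loss plus the EWC anchor term.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    q = model(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + 0.99 * model(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target) + ewc_penalty(model, snapshot, fisher)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design choice EWC makes is visible in ewc_penalty: parameters with large Fisher values, i.e., those important to earlier weather scenarios, are pulled strongly toward their consolidated values, while unimportant parameters remain free to adapt to new climate conditions.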
Pages: 1319-1336 (18 pages)