Optimizing Irrigation Efficiency using Deep Reinforcement Learning in the Field

Cited by: 1
Authors:
Ding, Xianzhong [1]
Du, Wan [1]
Affiliations:
[1] UC Merced, Merced, CA 95343 USA
Funding:
U.S. National Science Foundation
Keywords:
Irrigation control; reinforcement learning; monitoring; water resource optimization; experimentation; performance; INFILTRATION; FORMULATION; PREDICTION
DOI:
10.1145/3662182
Chinese Library Classification (CLC):
TP [Automation Technology; Computer Technology]
Discipline Code:
0812
Abstract:
Agricultural irrigation is a significant contributor to freshwater consumption, yet the irrigation systems currently used in the field are inefficient: they rely mainly on soil moisture sensors and the experience of growers and do not account for future soil moisture loss. Predicting soil moisture loss is challenging because it is influenced by numerous factors, including soil texture, weather conditions, and plant characteristics. This article proposes DRLIC (deep reinforcement learning for irrigation control), an irrigation system that uses deep reinforcement learning (DRL) to improve irrigation efficiency. The system employs a neural network, the DRL control agent, which learns a control policy that considers both the current soil moisture measurement and the future soil moisture loss. We introduce an irrigation reward function that enables the control agent to learn from previous experiences. However, the output of the DRL control agent may occasionally be unsafe, such as irrigating too much or too little. To avoid damaging plant health, we implement a safety mechanism that uses a soil moisture predictor to estimate the outcome of each candidate action; if the predicted outcome is deemed unsafe, a relatively conservative action is performed instead. To demonstrate the real-world applicability of the approach, we develop an irrigation system comprising sprinklers, sensing and control nodes, and a wireless network, and we evaluate DRLIC on a testbed of six almond trees. During a 15-day in-field experiment, DRLIC outperforms a widely used irrigation scheme, achieving water savings of up to 9.52%.
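The abstract describes a two-stage decision loop: the DRL agent proposes an irrigation action, a soil moisture predictor estimates the resulting moisture, and a conservative fallback action is applied when that prediction is deemed unsafe. The following is a minimal sketch of that control flow only; the policy stand-in, the water-balance predictor, the safe-moisture band, and every numeric value are hypothetical placeholders, not the DRLIC implementation described in the article.

```python
# Sketch of a safety-shielded DRL irrigation step (all models and numbers are
# illustrative assumptions, not the DRLIC system itself).
from dataclasses import dataclass


@dataclass
class SoilState:
    moisture: float      # current volumetric soil moisture (%)
    forecast_et: float   # forecast evapotranspiration loss (%), assumed feature


def drl_agent_action(state: SoilState) -> float:
    """Stand-in for the trained DRL policy network: returns irrigation in mm."""
    # A real agent would evaluate a neural network on the full state vector.
    return max(0.0, 30.0 - state.moisture + state.forecast_et)


def predict_next_moisture(state: SoilState, irrigation_mm: float) -> float:
    """Stand-in soil moisture predictor used by the safety mechanism."""
    # Assumed simple water balance: gain from irrigation, loss from ET.
    return state.moisture + 0.4 * irrigation_mm - state.forecast_et


def safe_action(state: SoilState,
                lower: float = 22.0,
                upper: float = 32.0,
                conservative_mm: float = 5.0) -> float:
    """Use the agent's action only if its predicted outcome stays in a safe band."""
    proposed = drl_agent_action(state)
    if lower <= predict_next_moisture(state, proposed) <= upper:
        return proposed          # predicted outcome is safe: keep the DRL action
    return conservative_mm       # otherwise fall back to a conservative action


if __name__ == "__main__":
    state = SoilState(moisture=24.0, forecast_et=3.0)
    print(f"Irrigation applied: {safe_action(state):.1f} mm")
```

The point of such a shield is that the learned policy stays in control whenever the predictor confirms the outcome lies within the safe moisture band; only predictions outside that band trigger the conservative fallback, protecting plant health without discarding the learned behavior.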
Pages: 34
Related Papers (50 in total)
  • [21] Optimizing Data Center Energy Efficiency via Event-Driven Deep Reinforcement Learning. Ran, Yongyi; Zhou, Xin; Hu, Han; Wen, Yonggang. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (02): 1296-1309.
  • [22] Assessing the value of deep reinforcement learning for irrigation scheduling. Kelly, T. D.; Foster, T.; Schultz, David M. SMART AGRICULTURAL TECHNOLOGY, 2024, 7.
  • [23] Deep Reinforcement Learning-Based Irrigation Scheduling. Yang, Y.; Hu, J.; Porter, D.; Marek, T.; Heflin, K.; Kong, H.; Sun, L. TRANSACTIONS OF THE ASABE, 2020, 63 (03): 549-556.
  • [24] Optimizing Automated Trading Systems with Deep Reinforcement Learning. Tran, Minh; Pham-Hi, Duc; Bui, Marc. ALGORITHMS, 2023, 16 (01).
  • [25] Optimizing ZX-diagrams with deep reinforcement learning. Naegele, Maximilian; Marquardt, Florian. MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2024, 5 (03).
  • [26] Deep Reinforcement Learning for Optimizing Finance Portfolio Management. Hu, Yuh-Jong; Lin, Shang-Jen. PROCEEDINGS 2019 AMITY INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE (AICAI), 2019: 14-20.
  • [27] Optimizing Sequential Experimental Design with Deep Reinforcement Learning. Blau, Tom; Bonilla, Edwin V.; Chades, Iadine; Dezfouli, Amir. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022.
  • [29] Towards maximum efficiency in heat pump operation: Self-optimizing defrost initiation control using deep reinforcement learning. Klingebiel, Jonas; Salamon, Moritz; Bogdanov, Plamen; Venzik, Valerius; Vering, Christian; Mueller, Dirk. ENERGY AND BUILDINGS, 2023, 297.
  • [30] Deep reinforcement learning for irrigation scheduling using high-dimensional sensor feedback. Saikai, Yuji; Peake, Allan; Chenu, Karine. PLOS WATER, 2023, 2 (09).