Fish growth trajectory tracking using Q-learning in precision aquaculture

Cited by: 16
Authors
Chahid, Abderrazak [1 ]
N'Doye, Ibrahima [1 ]
Majoris, John E. [2 ]
Berumen, Michael L. [2 ]
Laleg-Kirati, Taous-Meriem [1 ]
Affiliations
[1] King Abdullah Univ Sci & Technol KAUST, Elect & Math Sci & Engn Div CEMSE, Thuwal 23955-6900, Makkah Province, Saudi Arabia
[2] King Abdullah Univ Sci & Technol KAUST, Red Sea Res Ctr, Biol & Environm Sci & Engn Div, Thuwal 23955-6900, Makkah Province, Saudi Arabia
Keywords
Fish growth model; Reference trajectory tracking; Markov decision process; Process control; Q-learning; Reinforcement learning; TILAPIA OREOCHROMIS-NILOTICUS; MODEL; OUTPUT; POND;
DOI
10.1016/j.aquaculture.2021.737838
CLC Classification Number
S9 [Aquaculture and Fisheries]
Discipline Classification Code
0908
Abstract
This paper studies fish growth trajectory tracking using Q-learning under a representative bioenergetic growth model of Nile tilapia (Oreochromis niloticus). In practice, the fish growth rate varies and cannot be easily estimated because of complex aquaculture conditions and variable environmental factors. Moreover, the growth trajectory tracking problem is difficult to solve with most model-based control approaches because of the nonlinear couplings and interactions among multiple inputs, such as temperature, dissolved oxygen, and un-ionized ammonia, and the model uncertainty of the fish growth system. We formulate growth trajectory tracking as a sampled-data optimal control problem, using a Markov decision process with discrete state-action pairs on simulated growth trajectory data that adequately mimics the real aquaculture environment. We propose two Q-learning algorithms that learn the optimal control policy from simulated fish growth trajectories starting at the juvenile stage and ending at the desired market weight. The first Q-learning scheme learns the optimal feeding control policy for the growth of fish cultured in cages, while the second updates online the optimal feeding control policy, together with an optimal temperature profile, for fish grown in tanks. The simulation results demonstrate that both Q-learning control strategies achieve good trajectory tracking performance with lower feeding rates and help compensate for environmental changes in the manipulated variables and for the bioenergetic model uncertainties of fish growth in the aquaculture environment. The proposed Q-learning control policies achieve relative trajectory tracking errors of 1.7% and 6.6% for the average total fish weight in land-based tanks and floating cages, respectively. Furthermore, the combined feeding and temperature control policy reduces the relative feeding quantity, and hence food waste, by 11% in land-based tanks compared with floating cages, where the water temperature is maintained at the ambient 29.7 degrees C.
Pages: 9
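
For readers unfamiliar with the approach summarized in the abstract, the following is a minimal illustrative sketch of tabular Q-learning for tracking a reference growth trajectory with a discretized feeding-rate action. It is not the paper's method: the growth dynamics, reward weights, discretization, and all parameter values below are hypothetical placeholders, not the bioenergetic Nile tilapia model or the tuning used by the authors.

import numpy as np

# Sketch only: tabular Q-learning for reference trajectory tracking.
# State  = discretized fish weight, Action = discretized feeding level.
# The step() dynamics are a simple placeholder, NOT the paper's model.
rng = np.random.default_rng(0)

n_weight_bins = 50          # discretized fish-weight states
n_actions = 5               # discretized feeding-rate levels
n_steps = 100               # sampled-data horizon (e.g., days)
alpha, gamma, eps = 0.1, 0.95, 0.1

w_max = 600.0                                        # grams, discretization bound
reference = np.linspace(20.0, 500.0, n_steps + 1)    # desired weight trajectory

Q = np.zeros((n_weight_bins, n_actions))

def to_state(w):
    # Map a continuous weight (g) to a discrete state index.
    return min(int(w / w_max * n_weight_bins), n_weight_bins - 1)

def step(w, a):
    # Placeholder growth dynamics: gain increases with feeding level, plus noise.
    feed = 0.02 + 0.02 * a
    gain = feed * w * (1.0 - w / w_max) + rng.normal(0.0, 0.2)
    return max(w + gain, 1.0)

for episode in range(2000):
    w = 20.0                                         # juvenile initial weight
    for k in range(n_steps):
        s = to_state(w)
        # Epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        w_next = step(w, a)
        # Reward: penalize deviation from the reference and penalize feed use
        r = -abs(w_next - reference[k + 1]) - 0.5 * a
        s_next = to_state(w_next)
        # Standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        w = w_next

Acting greedily on the learned table (np.argmax(Q[s]) at each state) yields a feeding schedule that trades off tracking error against feed use, analogous in spirit to the paper's first scheme; the second scheme additionally adjusts a temperature profile, which this sketch does not attempt.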